Jan 28 15:17:26 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 28 15:17:26 crc restorecon[4756]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:17:26 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:17:27 crc restorecon[4756]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc 
restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc 
restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 
15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:17:27 crc 
restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:17:27 crc 
restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 
crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 
15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:17:27 crc 
restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc 
restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:17:27 crc restorecon[4756]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc 
restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 
crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc 
restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:17:27 crc restorecon[4756]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc 
restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:17:27 crc restorecon[4756]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:17:27 crc restorecon[4756]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 28 15:17:28 crc kubenswrapper[4871]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 15:17:28 crc kubenswrapper[4871]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 28 15:17:28 crc kubenswrapper[4871]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 15:17:28 crc kubenswrapper[4871]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 28 15:17:28 crc kubenswrapper[4871]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 28 15:17:28 crc kubenswrapper[4871]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.629556 4871 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635107 4871 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635167 4871 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635179 4871 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635188 4871 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635198 4871 feature_gate.go:330] unrecognized feature gate: Example Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635207 4871 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635215 4871 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635224 4871 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635233 4871 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635241 4871 
feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635250 4871 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635258 4871 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635266 4871 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635274 4871 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635282 4871 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635290 4871 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635297 4871 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635306 4871 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635314 4871 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635322 4871 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635334 4871 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635345 4871 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635353 4871 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635362 4871 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635370 4871 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635385 4871 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635393 4871 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635401 4871 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635409 4871 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635417 4871 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635424 4871 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635432 4871 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635440 4871 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635450 4871 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635457 4871 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635465 4871 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635475 4871 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635483 4871 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635491 4871 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635499 4871 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635506 4871 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635514 4871 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635524 4871 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635534 4871 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635543 4871 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635552 4871 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635560 4871 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635569 4871 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635580 4871 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635612 4871 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635620 4871 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635627 4871 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635635 4871 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635642 4871 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635651 4871 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635658 4871 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635666 4871 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635676 4871 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635686 4871 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635695 4871 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635704 4871 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635714 4871 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635725 4871 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635733 4871 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635742 4871 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635754 4871 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635762 4871 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635770 4871 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635778 4871 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635786 4871 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.635793 4871 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.635931 4871 flags.go:64] FLAG: --address="0.0.0.0"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.635946 4871 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.635961 4871 flags.go:64] FLAG: --anonymous-auth="true"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.635973 4871 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.635984 4871 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.635994 4871 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636004 4871 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636015 4871 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636024 4871 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636034 4871 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636044 4871 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636053 4871 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636063 4871 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636079 4871 flags.go:64] FLAG: --cgroup-root=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636089 4871 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636098 4871 flags.go:64] FLAG: --client-ca-file=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636107 4871 flags.go:64] FLAG: --cloud-config=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636116 4871 flags.go:64] FLAG: --cloud-provider=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636125 4871 flags.go:64] FLAG: --cluster-dns="[]"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636135 4871 flags.go:64] FLAG: --cluster-domain=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636144 4871 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636153 4871 flags.go:64] FLAG: --config-dir=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636162 4871 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636172 4871 flags.go:64] FLAG: --container-log-max-files="5"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636183 4871 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636192 4871 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636202 4871 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636211 4871 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636220 4871 flags.go:64] FLAG: --contention-profiling="false"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636229 4871 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636238 4871 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636247 4871 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636256 4871 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636267 4871 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636276 4871 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636285 4871 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636294 4871 flags.go:64] FLAG: --enable-load-reader="false"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636303 4871 flags.go:64] FLAG: --enable-server="true"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636312 4871 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636323 4871 flags.go:64] FLAG: --event-burst="100"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636333 4871 flags.go:64] FLAG: --event-qps="50"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636341 4871 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636350 4871 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636359 4871 flags.go:64] FLAG: --eviction-hard=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636370 4871 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636379 4871 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636388 4871 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636397 4871 flags.go:64] FLAG: --eviction-soft=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636406 4871 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636416 4871 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636425 4871 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636434 4871 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636445 4871 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636456 4871 flags.go:64] FLAG: --fail-swap-on="true"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636467 4871 flags.go:64] FLAG: --feature-gates=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636481 4871 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636492 4871 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636504 4871 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636518 4871 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636529 4871 flags.go:64] FLAG: --healthz-port="10248"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636541 4871 flags.go:64] FLAG: --help="false"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636551 4871 flags.go:64] FLAG: --hostname-override=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636560 4871 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636569 4871 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636578 4871 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636616 4871 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636626 4871 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636635 4871 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636643 4871 flags.go:64] FLAG: --image-service-endpoint=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636652 4871 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636661 4871 flags.go:64] FLAG: --kube-api-burst="100"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636670 4871 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636680 4871 flags.go:64] FLAG: --kube-api-qps="50"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636688 4871 flags.go:64] FLAG: --kube-reserved=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636697 4871 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636706 4871 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636716 4871 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636726 4871 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636734 4871 flags.go:64] FLAG: --lock-file=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636743 4871 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636751 4871 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636760 4871 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636774 4871 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636783 4871 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636792 4871 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636802 4871 flags.go:64] FLAG: --logging-format="text"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636811 4871 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636821 4871 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636830 4871 flags.go:64] FLAG: --manifest-url=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636838 4871 flags.go:64] FLAG: --manifest-url-header=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636851 4871 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636860 4871 flags.go:64] FLAG: --max-open-files="1000000"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636877 4871 flags.go:64] FLAG: --max-pods="110"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636887 4871 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636896 4871 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636905 4871 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636914 4871 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636923 4871 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636933 4871 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636942 4871 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636959 4871 flags.go:64] FLAG: --node-status-max-images="50"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636968 4871 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636977 4871 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636987 4871 flags.go:64] FLAG: --pod-cidr=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.636996 4871 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637008 4871 flags.go:64] FLAG: --pod-manifest-path=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637017 4871 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637026 4871 flags.go:64] FLAG: --pods-per-core="0"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637036 4871 flags.go:64] FLAG: --port="10250"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637045 4871 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637054 4871 flags.go:64] FLAG: --provider-id=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637062 4871 flags.go:64] FLAG: --qos-reserved=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637071 4871 flags.go:64] FLAG: --read-only-port="10255"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637080 4871 flags.go:64] FLAG: --register-node="true"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637089 4871 flags.go:64] FLAG: --register-schedulable="true"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637098 4871 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637112 4871 flags.go:64] FLAG: --registry-burst="10"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637121 4871 flags.go:64] FLAG: --registry-qps="5"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637130 4871 flags.go:64] FLAG: --reserved-cpus=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637139 4871 flags.go:64] FLAG: --reserved-memory=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637150 4871 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637160 4871 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637205 4871 flags.go:64] FLAG: --rotate-certificates="false"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637216 4871 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637225 4871 flags.go:64] FLAG: --runonce="false"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637234 4871 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637243 4871 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637253 4871 flags.go:64] FLAG: --seccomp-default="false"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637262 4871 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637271 4871 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637280 4871 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637289 4871 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637298 4871 flags.go:64] FLAG: --storage-driver-password="root"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637307 4871 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637316 4871 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637325 4871 flags.go:64] FLAG: --storage-driver-user="root"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637334 4871 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637343 4871 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637352 4871 flags.go:64] FLAG: --system-cgroups=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637361 4871 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637375 4871 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637383 4871 flags.go:64] FLAG: --tls-cert-file=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637392 4871 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637402 4871 flags.go:64] FLAG: --tls-min-version=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637411 4871 flags.go:64] FLAG: --tls-private-key-file=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637420 4871 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637428 4871 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637437 4871 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637446 4871 flags.go:64] FLAG: --v="2"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637457 4871 flags.go:64] FLAG: --version="false"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637468 4871 flags.go:64] FLAG: --vmodule=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637479 4871 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.637488 4871 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637749 4871 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637764 4871 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637775 4871 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637784 4871 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637795 4871 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637805 4871 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637813 4871 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637821 4871 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637829 4871 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637837 4871 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637845 4871 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637855 4871 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637866 4871 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637874 4871 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637882 4871 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637890 4871 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637899 4871 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637907 4871 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637915 4871 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637923 4871 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637930 4871 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637938 4871 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637947 4871 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637954 4871 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637962 4871 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637970 4871 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637978 4871 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637986 4871 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.637996 4871 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638005 4871 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638013 4871 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638021 4871 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638029 4871 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638037 4871 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638050 4871 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638057 4871 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638065 4871 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638074 4871 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638082 4871 feature_gate.go:330] unrecognized feature gate: Example
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638090 4871 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638099 4871 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638107 4871 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638115 4871 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638124 4871 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638131 4871 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638142 4871 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638151 4871 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638159 4871 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638167 4871 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638176 4871 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638184 4871 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638191 4871 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638199 4871 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638207 4871 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638215 4871 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638222 4871 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638230 4871 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638238 4871 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638245 4871 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638253 4871 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638261 4871 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638269 4871 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638277 4871 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638284 4871 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638295 4871 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638305 4871 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638316 4871 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638324 4871 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638332 4871 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638341 4871 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.638349 4871 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.640194 4871 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.653033 4871 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.653090 4871 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653214 4871 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653229 4871 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653242 4871 feature_gate.go:330] unrecognized feature 
gate: NetworkLiveMigration Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653253 4871 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653264 4871 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653275 4871 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653286 4871 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653296 4871 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653311 4871 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653328 4871 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653339 4871 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653350 4871 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653360 4871 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653370 4871 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653380 4871 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653390 4871 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653400 4871 feature_gate.go:330] 
unrecognized feature gate: ManagedBootImages Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653413 4871 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653423 4871 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653433 4871 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653443 4871 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653453 4871 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653463 4871 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653477 4871 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653491 4871 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653501 4871 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653512 4871 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653525 4871 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653540 4871 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653553 4871 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653564 4871 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653574 4871 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653584 4871 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653634 4871 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653645 4871 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653656 4871 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653667 4871 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653677 4871 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653686 4871 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653694 4871 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653702 4871 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653710 4871 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653717 4871 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653725 4871 
feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653732 4871 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653741 4871 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653748 4871 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653756 4871 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653764 4871 feature_gate.go:330] unrecognized feature gate: Example Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653772 4871 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653781 4871 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653789 4871 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653797 4871 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653818 4871 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653826 4871 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653834 4871 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653841 4871 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653849 4871 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 15:17:28 
crc kubenswrapper[4871]: W0128 15:17:28.653856 4871 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653864 4871 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653872 4871 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653879 4871 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653887 4871 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653894 4871 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653902 4871 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653910 4871 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653918 4871 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653925 4871 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653933 4871 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653941 4871 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.653948 4871 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.653962 4871 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true 
MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654243 4871 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654263 4871 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654275 4871 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654287 4871 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654298 4871 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654308 4871 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654319 4871 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654330 4871 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654341 4871 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654353 4871 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654363 4871 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654372 4871 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 15:17:28 crc kubenswrapper[4871]: 
W0128 15:17:28.654383 4871 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654392 4871 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654402 4871 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654414 4871 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654422 4871 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654431 4871 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654438 4871 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654446 4871 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654454 4871 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654461 4871 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654469 4871 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654476 4871 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654485 4871 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654493 4871 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654500 4871 
feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654508 4871 feature_gate.go:330] unrecognized feature gate: Example Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654516 4871 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654523 4871 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654531 4871 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654539 4871 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654547 4871 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654554 4871 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654562 4871 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654570 4871 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654577 4871 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654619 4871 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654630 4871 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654640 4871 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654651 4871 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654665 4871 
feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654678 4871 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654689 4871 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654699 4871 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654709 4871 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654719 4871 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654733 4871 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654745 4871 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654756 4871 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654767 4871 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654778 4871 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654789 4871 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654801 4871 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654810 4871 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 
15:17:28.654820 4871 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654830 4871 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654839 4871 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654848 4871 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654858 4871 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654870 4871 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654883 4871 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654895 4871 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654906 4871 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654917 4871 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654927 4871 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654938 4871 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654948 4871 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654957 4871 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654967 4871 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.654976 4871 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.654992 4871 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.655328 4871 server.go:940] "Client rotation is on, will bootstrap in background" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.665620 4871 
bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.665765 4871 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.667470 4871 server.go:997] "Starting client certificate rotation" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.667503 4871 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.667702 4871 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-22 03:36:15.388059681 +0000 UTC Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.668070 4871 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.698082 4871 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.699573 4871 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 15:17:28 crc kubenswrapper[4871]: E0128 15:17:28.702970 4871 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.716939 4871 log.go:25] "Validated CRI v1 runtime API" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.759859 4871 log.go:25] 
"Validated CRI v1 image API" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.762630 4871 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.769383 4871 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-28-15-12-37-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.769417 4871 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.788202 4871 manager.go:217] Machine: {Timestamp:2026-01-28 15:17:28.785516245 +0000 UTC m=+0.681354617 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:fc09a3b6-b11a-4dc2-972c-09bf48a77414 BootID:5fb24c48-4a41-4f44-93d2-0105f9c98753 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp 
DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:5e:1f:49 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:5e:1f:49 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d4:ee:1a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:2e:7b:ff Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b6:02:fc Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a7:7c:a6 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:35:2c:d9 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:7a:65:c1:f7:45:f2 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:16:b9:14:6d:e8:dd Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: 
DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.788551 4871 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.788755 4871 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.791906 4871 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.792109 4871 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.792150 4871 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.792399 4871 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.792411 4871 container_manager_linux.go:303] "Creating device plugin manager" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.792978 4871 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.793011 4871 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.793223 4871 state_mem.go:36] "Initialized new in-memory state store" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.793320 4871 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.801894 4871 kubelet.go:418] "Attempting to sync node with API server" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.801930 4871 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.801967 4871 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.802022 4871 kubelet.go:324] "Adding apiserver pod source" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.802052 4871 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.809704 4871 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Jan 28 15:17:28 crc kubenswrapper[4871]: E0128 15:17:28.809846 4871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.809952 4871 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Jan 28 15:17:28 crc kubenswrapper[4871]: E0128 15:17:28.810040 4871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.813350 4871 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.815620 4871 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.816904 4871 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.820670 4871 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.820689 4871 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.820696 4871 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.820703 4871 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.820713 4871 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.820720 4871 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.820727 4871 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.820738 4871 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.820746 4871 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.820755 4871 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.820765 4871 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.820772 4871 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.823630 4871 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.824075 4871 server.go:1280] "Started kubelet" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.824319 4871 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.824663 4871 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 15:17:28 crc systemd[1]: Started Kubernetes Kubelet. Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.825734 4871 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.826401 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.826432 4871 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.826795 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 08:41:03.102822222 +0000 UTC Jan 28 15:17:28 crc kubenswrapper[4871]: E0128 15:17:28.827172 4871 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.827222 4871 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.827173 4871 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.827315 4871 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 
15:17:28.827392 4871 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.828880 4871 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Jan 28 15:17:28 crc kubenswrapper[4871]: E0128 15:17:28.829096 4871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="200ms" Jan 28 15:17:28 crc kubenswrapper[4871]: E0128 15:17:28.829064 4871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.829220 4871 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.829253 4871 factory.go:55] Registering systemd factory Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.829272 4871 factory.go:221] Registration of the systemd container factory successfully Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.830033 4871 server.go:460] "Adding debug handlers to kubelet server" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.830209 4871 factory.go:153] Registering CRI-O factory Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 
15:17:28.830226 4871 factory.go:221] Registration of the crio container factory successfully Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.830249 4871 factory.go:103] Registering Raw factory Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.830267 4871 manager.go:1196] Started watching for new ooms in manager Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.831089 4871 manager.go:319] Starting recovery of all containers Jan 28 15:17:28 crc kubenswrapper[4871]: E0128 15:17:28.833268 4871 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.199:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188eee0e4ee948ca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:17:28.824047818 +0000 UTC m=+0.719886140,LastTimestamp:2026-01-28 15:17:28.824047818 +0000 UTC m=+0.719886140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852144 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852242 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 28 15:17:28 crc 
kubenswrapper[4871]: I0128 15:17:28.852276 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852304 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852332 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852361 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852387 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852410 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852466 4871 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852506 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852539 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852569 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852633 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852668 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852696 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852721 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852752 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852779 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852819 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852849 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852879 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852918 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852948 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.852973 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853004 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853031 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853067 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" 
seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853143 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853224 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853254 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853333 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853367 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853397 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 
15:17:28.853425 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853466 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853517 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853556 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853612 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853643 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853669 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853693 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853719 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853744 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853768 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853800 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853828 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853853 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853932 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853967 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.853996 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854023 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854052 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854092 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854122 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854154 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854184 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854210 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854236 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854262 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854301 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854348 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854385 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854422 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854451 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854494 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854525 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854564 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854618 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854751 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854785 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854812 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854837 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854862 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854891 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854920 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854946 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" 
seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.854974 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.855015 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.855062 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.855089 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.855119 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.855146 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 28 15:17:28 crc 
kubenswrapper[4871]: I0128 15:17:28.855184 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.855216 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.855255 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.855283 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.855309 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.855334 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.855360 4871 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.855388 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.855446 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.855473 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.855520 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.855547 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.855576 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.855672 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.855707 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.857858 4871 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.857919 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.857950 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.857979 4871 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.858008 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.858037 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.860293 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.860345 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.860395 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.860434 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.860479 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.860509 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.860659 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.860703 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.860738 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.860772 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.860802 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.860831 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.860865 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.860916 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.860945 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.860973 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861001 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861029 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861061 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861088 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861115 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861143 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" 
seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861171 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861199 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861225 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861256 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861286 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861314 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 
15:17:28.861340 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861367 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861394 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861422 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861451 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861479 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861505 4871 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861537 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861562 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861632 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861665 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861708 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861740 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861770 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861799 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861826 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861854 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861884 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861910 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861940 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861970 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.861999 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862033 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862062 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862093 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862119 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862151 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862180 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862208 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862241 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862271 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862313 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862344 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862373 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862398 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862427 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862455 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862482 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862509 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862538 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862568 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862646 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862695 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862726 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862755 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862784 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862812 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862841 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862869 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862901 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862928 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.862973 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863001 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863065 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863104 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863134 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863161 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863188 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863215 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863253 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863279 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863312 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863340 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863369 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863406 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863433 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863460 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863487 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863512 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863537 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863569 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863634 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863664 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863690 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863717 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863745 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863773 4871 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863799 4871 reconstruct.go:97] "Volume reconstruction finished"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.863818 4871 reconciler.go:26] "Reconciler: start to sync state"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.864280 4871 manager.go:324] Recovery completed
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.876729 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.878677 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.878755 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.878775 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.879786 4871 cpu_manager.go:225] "Starting CPU manager" policy="none"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.879819 4871 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.879859 4871 state_mem.go:36] "Initialized new in-memory state store"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.898009 4871 policy_none.go:49] "None policy: Start"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.899247 4871 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.899284 4871 state_mem.go:35] "Initializing new in-memory state store"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.899725 4871 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.902626 4871 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.902663 4871 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.902689 4871 kubelet.go:2335] "Starting kubelet main sync loop"
Jan 28 15:17:28 crc kubenswrapper[4871]: E0128 15:17:28.902838 4871 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 28 15:17:28 crc kubenswrapper[4871]: W0128 15:17:28.903361 4871 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused
Jan 28 15:17:28 crc kubenswrapper[4871]: E0128 15:17:28.903407 4871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError"
Jan 28 15:17:28 crc kubenswrapper[4871]: E0128 15:17:28.927568 4871 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.971686 4871 manager.go:334] "Starting Device Plugin manager"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.971950 4871 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.971974 4871 server.go:79] "Starting device plugin registration server"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.972393 4871 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.972414 4871 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.972975 4871 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.973083 4871 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 28 15:17:28 crc kubenswrapper[4871]: I0128 15:17:28.973103 4871 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 28 15:17:28 crc kubenswrapper[4871]: E0128 15:17:28.981293 4871 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.003696 4871 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.003801 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.004924 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.004957 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.004968 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.005112 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.005407 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.005474 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.005798 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.005830 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.005843 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.005957 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.006791 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.006835 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.006859 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.006871 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.006898 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.008966 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.009006 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.009033 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.009263 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.009314 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.009363 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.009385 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.009575 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.009712 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.010514 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.011364 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.011388 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.011489 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.011649 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.011708 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.013022 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.013045 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.013068 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.013163 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.013181 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.013192 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.013234 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.013257 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.013542 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.013747 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.013910 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.014176 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.014224 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.014250 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:17:29 crc kubenswrapper[4871]: E0128 15:17:29.029915 4871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="400ms"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.066426 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.066483 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.066519 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.066550 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.066609 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.066644 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.066703 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.066749 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.066823 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.066907 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.066999 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.067031 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.067061 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.067091 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.067123 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.072734 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.073987 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.074050 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.074077 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.074141 4871 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: E0128 15:17:29.074675 4871 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.199:6443: connect: connection refused" node="crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168012 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168089 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168124 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168151 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168179 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168206 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168225 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168235 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168301 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168316 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168336 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168363 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168421 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168369 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168314 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168379 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168393 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168372 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168620 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168674 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168710 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168739 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168769 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168795 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168938 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.168972 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 
15:17:29.169009 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.169034 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.169070 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.169115 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.275831 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.277710 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.277769 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.277782 4871 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.277813 4871 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:17:29 crc kubenswrapper[4871]: E0128 15:17:29.278194 4871 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.199:6443: connect: connection refused" node="crc" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.349206 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.356936 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.383890 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: W0128 15:17:29.402080 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-e18aba83002ac5991cdf9f2beb80051691093259c92e495dc1fce86c22b9f1d9 WatchSource:0}: Error finding container e18aba83002ac5991cdf9f2beb80051691093259c92e495dc1fce86c22b9f1d9: Status 404 returned error can't find the container with id e18aba83002ac5991cdf9f2beb80051691093259c92e495dc1fce86c22b9f1d9 Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.404398 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: W0128 15:17:29.406982 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-fdbb1d68b37799b039f4d2a895e780958c178eb540f05869f0340d949d030261 WatchSource:0}: Error finding container fdbb1d68b37799b039f4d2a895e780958c178eb540f05869f0340d949d030261: Status 404 returned error can't find the container with id fdbb1d68b37799b039f4d2a895e780958c178eb540f05869f0340d949d030261 Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.411689 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:17:29 crc kubenswrapper[4871]: W0128 15:17:29.413830 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-ff8524f27ffd9992025fb44b784fd7c3cd978628adc532080c5192693651d32e WatchSource:0}: Error finding container ff8524f27ffd9992025fb44b784fd7c3cd978628adc532080c5192693651d32e: Status 404 returned error can't find the container with id ff8524f27ffd9992025fb44b784fd7c3cd978628adc532080c5192693651d32e Jan 28 15:17:29 crc kubenswrapper[4871]: W0128 15:17:29.423766 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-0e77566e920e22a495ef92b8cf9c2fd2ba8ba90a27064d477a20ffbfbdb66725 WatchSource:0}: Error finding container 0e77566e920e22a495ef92b8cf9c2fd2ba8ba90a27064d477a20ffbfbdb66725: Status 404 returned error can't find the container with id 0e77566e920e22a495ef92b8cf9c2fd2ba8ba90a27064d477a20ffbfbdb66725 Jan 28 15:17:29 crc kubenswrapper[4871]: W0128 15:17:29.425536 4871 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-e8be419bcd81531e477a98465f32f308b90f45f2de4a1615d180491dfdab4cf2 WatchSource:0}: Error finding container e8be419bcd81531e477a98465f32f308b90f45f2de4a1615d180491dfdab4cf2: Status 404 returned error can't find the container with id e8be419bcd81531e477a98465f32f308b90f45f2de4a1615d180491dfdab4cf2 Jan 28 15:17:29 crc kubenswrapper[4871]: E0128 15:17:29.430579 4871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="800ms" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.679124 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.681519 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.681580 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.681615 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.681659 4871 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:17:29 crc kubenswrapper[4871]: E0128 15:17:29.682331 4871 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.199:6443: connect: connection refused" node="crc" Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.827372 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 19:04:25.237856784 +0000 UTC Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.827984 4871 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.915180 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e8be419bcd81531e477a98465f32f308b90f45f2de4a1615d180491dfdab4cf2"} Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.916704 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0e77566e920e22a495ef92b8cf9c2fd2ba8ba90a27064d477a20ffbfbdb66725"} Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.918359 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ff8524f27ffd9992025fb44b784fd7c3cd978628adc532080c5192693651d32e"} Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.920515 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fdbb1d68b37799b039f4d2a895e780958c178eb540f05869f0340d949d030261"} Jan 28 15:17:29 crc kubenswrapper[4871]: I0128 15:17:29.921866 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e18aba83002ac5991cdf9f2beb80051691093259c92e495dc1fce86c22b9f1d9"} Jan 28 15:17:29 crc kubenswrapper[4871]: W0128 15:17:29.936478 4871 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Jan 28 15:17:29 crc kubenswrapper[4871]: E0128 15:17:29.936584 4871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:17:29 crc kubenswrapper[4871]: W0128 15:17:29.994156 4871 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Jan 28 15:17:29 crc kubenswrapper[4871]: E0128 15:17:29.994270 4871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:17:30 crc kubenswrapper[4871]: E0128 15:17:30.232124 4871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="1.6s" Jan 28 15:17:30 crc kubenswrapper[4871]: 
W0128 15:17:30.381202 4871 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Jan 28 15:17:30 crc kubenswrapper[4871]: E0128 15:17:30.381325 4871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:17:30 crc kubenswrapper[4871]: W0128 15:17:30.383833 4871 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Jan 28 15:17:30 crc kubenswrapper[4871]: E0128 15:17:30.383910 4871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:17:30 crc kubenswrapper[4871]: E0128 15:17:30.386992 4871 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.199:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188eee0e4ee948ca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:17:28.824047818 +0000 UTC m=+0.719886140,LastTimestamp:2026-01-28 15:17:28.824047818 +0000 UTC m=+0.719886140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.483217 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.484872 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.484931 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.484949 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.484981 4871 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:17:30 crc kubenswrapper[4871]: E0128 15:17:30.485721 4871 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.199:6443: connect: connection refused" node="crc" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.791796 4871 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 15:17:30 crc kubenswrapper[4871]: E0128 15:17:30.792695 4871 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.199:6443: connect: 
connection refused" logger="UnhandledError" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.827581 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:14:56.171846429 +0000 UTC Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.828411 4871 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.927366 4871 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e0ff221c97a59e61461b6518db7c1e3b30bf02f544c14f247fa74ceea7317d9d" exitCode=0 Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.927457 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e0ff221c97a59e61461b6518db7c1e3b30bf02f544c14f247fa74ceea7317d9d"} Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.927519 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.928794 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.928858 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.928876 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.930663 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934"} Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.930737 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30"} Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.930761 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219"} Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.933000 4871 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398" exitCode=0 Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.933117 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398"} Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.933125 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.934311 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.934344 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:30 crc 
kubenswrapper[4871]: I0128 15:17:30.934357 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.940571 4871 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a" exitCode=0 Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.940714 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.940632 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a"} Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.942330 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.942529 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.942748 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.943198 4871 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb" exitCode=0 Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.943259 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb"} Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 
15:17:30.943331 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.944251 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.944298 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.944315 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.949418 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.950747 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.950777 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:30 crc kubenswrapper[4871]: I0128 15:17:30.950792 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.827824 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 19:59:19.792034054 +0000 UTC Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.828622 4871 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Jan 28 15:17:31 crc kubenswrapper[4871]: E0128 15:17:31.833130 4871 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="3.2s" Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.954729 4871 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d" exitCode=0 Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.954881 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d"} Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.955145 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.956424 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.956463 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.956480 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.960663 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a1efc7e9400c3dfe89bbbdb0b78c108d7a5d013c433d5b62f53fe338f5d49b95"} Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.960753 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.966609 4871 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.966654 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.966664 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.968091 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2"} Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.968140 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.969086 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.969109 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.969121 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.971671 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.971953 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bd606211ea6e58c66c05dd5ecaf3a0c220b19bc7c82d1ea8d298ae82bd1b675e"} Jan 28 15:17:31 crc kubenswrapper[4871]: 
I0128 15:17:31.971978 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5c31213dea27da9a677ea193b8780cc0e632e1263829731856bf682a3e1853e6"} Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.971991 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0a2644afc15f39effb1daa734bb53944ad9c7e05e72fbf8e8627879b5cc6c473"} Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.972518 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.972554 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.972565 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.982841 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f"} Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.982926 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0"} Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.982947 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806"} Jan 28 15:17:31 crc kubenswrapper[4871]: I0128 15:17:31.982958 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572"} Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.086190 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.087672 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.087752 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.087769 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.087802 4871 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:17:32 crc kubenswrapper[4871]: E0128 15:17:32.088504 4871 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.199:6443: connect: connection refused" node="crc" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.828092 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:58:13.208588976 +0000 UTC Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.990674 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cd62c3a8dcc49414acdbe9951aa7893f4f0be579f9496397ab2cd417ce2e5b6a"} Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.990712 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.991846 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.991924 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.991950 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.993922 4871 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a" exitCode=0 Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.994003 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a"} Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.994028 4871 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.994061 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.994116 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.994131 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.994188 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.995458 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.995486 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.995498 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.995514 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.995555 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.995579 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.995856 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.995903 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.995923 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.995923 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.995976 4871 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:32 crc kubenswrapper[4871]: I0128 15:17:32.996014 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:33 crc kubenswrapper[4871]: I0128 15:17:33.828651 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 05:51:34.171474912 +0000 UTC Jan 28 15:17:34 crc kubenswrapper[4871]: I0128 15:17:34.004196 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:34 crc kubenswrapper[4871]: I0128 15:17:34.004394 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b"} Jan 28 15:17:34 crc kubenswrapper[4871]: I0128 15:17:34.004536 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47"} Jan 28 15:17:34 crc kubenswrapper[4871]: I0128 15:17:34.004573 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:17:34 crc kubenswrapper[4871]: I0128 15:17:34.004644 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1"} Jan 28 15:17:34 crc kubenswrapper[4871]: I0128 15:17:34.004664 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f"} Jan 28 15:17:34 crc kubenswrapper[4871]: I0128 15:17:34.005708 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:34 crc kubenswrapper[4871]: I0128 15:17:34.005756 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:34 crc kubenswrapper[4871]: I0128 15:17:34.005773 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:34 crc kubenswrapper[4871]: I0128 15:17:34.551003 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:17:34 crc kubenswrapper[4871]: I0128 15:17:34.551273 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:34 crc kubenswrapper[4871]: I0128 15:17:34.553189 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:34 crc kubenswrapper[4871]: I0128 15:17:34.553246 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:34 crc kubenswrapper[4871]: I0128 15:17:34.553264 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:34 crc kubenswrapper[4871]: I0128 15:17:34.829281 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 03:59:03.211659762 +0000 UTC Jan 28 15:17:34 crc kubenswrapper[4871]: I0128 15:17:34.982457 4871 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 15:17:35 crc kubenswrapper[4871]: 
I0128 15:17:35.011152 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f"} Jan 28 15:17:35 crc kubenswrapper[4871]: I0128 15:17:35.011219 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:35 crc kubenswrapper[4871]: I0128 15:17:35.011282 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:35 crc kubenswrapper[4871]: I0128 15:17:35.012579 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:35 crc kubenswrapper[4871]: I0128 15:17:35.012673 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:35 crc kubenswrapper[4871]: I0128 15:17:35.012693 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:35 crc kubenswrapper[4871]: I0128 15:17:35.013989 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:35 crc kubenswrapper[4871]: I0128 15:17:35.014039 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:35 crc kubenswrapper[4871]: I0128 15:17:35.014052 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:35 crc kubenswrapper[4871]: I0128 15:17:35.289258 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:35 crc kubenswrapper[4871]: I0128 15:17:35.291130 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:35 crc kubenswrapper[4871]: 
I0128 15:17:35.291184 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:35 crc kubenswrapper[4871]: I0128 15:17:35.291203 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:35 crc kubenswrapper[4871]: I0128 15:17:35.291238 4871 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:17:35 crc kubenswrapper[4871]: I0128 15:17:35.830068 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:03:34.331616994 +0000 UTC Jan 28 15:17:35 crc kubenswrapper[4871]: I0128 15:17:35.852743 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:17:36 crc kubenswrapper[4871]: I0128 15:17:36.014077 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:36 crc kubenswrapper[4871]: I0128 15:17:36.014136 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:36 crc kubenswrapper[4871]: I0128 15:17:36.015672 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:36 crc kubenswrapper[4871]: I0128 15:17:36.015714 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:36 crc kubenswrapper[4871]: I0128 15:17:36.015737 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:36 crc kubenswrapper[4871]: I0128 15:17:36.017962 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:36 crc kubenswrapper[4871]: I0128 15:17:36.018086 4871 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:36 crc kubenswrapper[4871]: I0128 15:17:36.018116 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:36 crc kubenswrapper[4871]: I0128 15:17:36.271380 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 28 15:17:36 crc kubenswrapper[4871]: I0128 15:17:36.533511 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:17:36 crc kubenswrapper[4871]: I0128 15:17:36.831048 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 08:39:01.935172574 +0000 UTC Jan 28 15:17:37 crc kubenswrapper[4871]: I0128 15:17:37.017084 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:37 crc kubenswrapper[4871]: I0128 15:17:37.017086 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:37 crc kubenswrapper[4871]: I0128 15:17:37.018145 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:37 crc kubenswrapper[4871]: I0128 15:17:37.018181 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:37 crc kubenswrapper[4871]: I0128 15:17:37.018192 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:37 crc kubenswrapper[4871]: I0128 15:17:37.018320 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:37 crc kubenswrapper[4871]: I0128 15:17:37.018351 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:17:37 crc kubenswrapper[4871]: I0128 15:17:37.018360 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:37 crc kubenswrapper[4871]: I0128 15:17:37.831776 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 03:07:14.596925289 +0000 UTC Jan 28 15:17:37 crc kubenswrapper[4871]: I0128 15:17:37.886221 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:17:37 crc kubenswrapper[4871]: I0128 15:17:37.886462 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:37 crc kubenswrapper[4871]: I0128 15:17:37.888024 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:37 crc kubenswrapper[4871]: I0128 15:17:37.888091 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:37 crc kubenswrapper[4871]: I0128 15:17:37.888115 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:38 crc kubenswrapper[4871]: I0128 15:17:38.299732 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:17:38 crc kubenswrapper[4871]: I0128 15:17:38.300042 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:38 crc kubenswrapper[4871]: I0128 15:17:38.301515 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:38 crc kubenswrapper[4871]: I0128 15:17:38.301565 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:17:38 crc kubenswrapper[4871]: I0128 15:17:38.301581 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:38 crc kubenswrapper[4871]: I0128 15:17:38.832746 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 03:35:44.71321939 +0000 UTC Jan 28 15:17:38 crc kubenswrapper[4871]: I0128 15:17:38.922482 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 28 15:17:38 crc kubenswrapper[4871]: I0128 15:17:38.922862 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:38 crc kubenswrapper[4871]: I0128 15:17:38.924706 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:38 crc kubenswrapper[4871]: I0128 15:17:38.924766 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:38 crc kubenswrapper[4871]: I0128 15:17:38.924778 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:38 crc kubenswrapper[4871]: E0128 15:17:38.981525 4871 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 15:17:39 crc kubenswrapper[4871]: I0128 15:17:39.266628 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:17:39 crc kubenswrapper[4871]: I0128 15:17:39.266977 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:39 crc kubenswrapper[4871]: I0128 15:17:39.269328 4871 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:39 crc kubenswrapper[4871]: I0128 15:17:39.269489 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:39 crc kubenswrapper[4871]: I0128 15:17:39.269511 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:39 crc kubenswrapper[4871]: I0128 15:17:39.272655 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:17:39 crc kubenswrapper[4871]: I0128 15:17:39.833997 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:56:54.657233426 +0000 UTC Jan 28 15:17:39 crc kubenswrapper[4871]: I0128 15:17:39.904795 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:17:40 crc kubenswrapper[4871]: I0128 15:17:40.026072 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:40 crc kubenswrapper[4871]: I0128 15:17:40.027712 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:40 crc kubenswrapper[4871]: I0128 15:17:40.027785 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:40 crc kubenswrapper[4871]: I0128 15:17:40.027812 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:40 crc kubenswrapper[4871]: I0128 15:17:40.031522 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:17:40 crc kubenswrapper[4871]: I0128 15:17:40.834961 4871 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 02:18:44.314413009 +0000 UTC Jan 28 15:17:41 crc kubenswrapper[4871]: I0128 15:17:41.028058 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:41 crc kubenswrapper[4871]: I0128 15:17:41.029086 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:41 crc kubenswrapper[4871]: I0128 15:17:41.029142 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:41 crc kubenswrapper[4871]: I0128 15:17:41.029163 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:41 crc kubenswrapper[4871]: I0128 15:17:41.836092 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 10:43:47.222169692 +0000 UTC Jan 28 15:17:42 crc kubenswrapper[4871]: I0128 15:17:42.030376 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:42 crc kubenswrapper[4871]: I0128 15:17:42.031368 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:42 crc kubenswrapper[4871]: I0128 15:17:42.031401 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:42 crc kubenswrapper[4871]: I0128 15:17:42.031411 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:42 crc kubenswrapper[4871]: W0128 15:17:42.593281 4871 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 28 15:17:42 crc kubenswrapper[4871]: I0128 15:17:42.593368 4871 trace.go:236] Trace[1503698536]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 15:17:32.591) (total time: 10001ms): Jan 28 15:17:42 crc kubenswrapper[4871]: Trace[1503698536]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:17:42.593) Jan 28 15:17:42 crc kubenswrapper[4871]: Trace[1503698536]: [10.001563505s] [10.001563505s] END Jan 28 15:17:42 crc kubenswrapper[4871]: E0128 15:17:42.593387 4871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 28 15:17:42 crc kubenswrapper[4871]: W0128 15:17:42.808132 4871 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 28 15:17:42 crc kubenswrapper[4871]: I0128 15:17:42.808251 4871 trace.go:236] Trace[434505804]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 15:17:32.806) (total time: 10001ms): Jan 28 15:17:42 crc kubenswrapper[4871]: Trace[434505804]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:17:42.808) Jan 28 15:17:42 crc kubenswrapper[4871]: Trace[434505804]: [10.001320317s] [10.001320317s] END Jan 28 15:17:42 
crc kubenswrapper[4871]: E0128 15:17:42.808279 4871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 28 15:17:42 crc kubenswrapper[4871]: I0128 15:17:42.828863 4871 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 28 15:17:42 crc kubenswrapper[4871]: I0128 15:17:42.836897 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:39:59.723883419 +0000 UTC Jan 28 15:17:42 crc kubenswrapper[4871]: I0128 15:17:42.904815 4871 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 15:17:42 crc kubenswrapper[4871]: I0128 15:17:42.904894 4871 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 15:17:42 crc kubenswrapper[4871]: I0128 15:17:42.934193 4871 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure 
output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36910->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 28 15:17:42 crc kubenswrapper[4871]: I0128 15:17:42.934257 4871 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36910->192.168.126.11:17697: read: connection reset by peer" Jan 28 15:17:43 crc kubenswrapper[4871]: I0128 15:17:43.034609 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 15:17:43 crc kubenswrapper[4871]: I0128 15:17:43.036716 4871 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cd62c3a8dcc49414acdbe9951aa7893f4f0be579f9496397ab2cd417ce2e5b6a" exitCode=255 Jan 28 15:17:43 crc kubenswrapper[4871]: I0128 15:17:43.036762 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cd62c3a8dcc49414acdbe9951aa7893f4f0be579f9496397ab2cd417ce2e5b6a"} Jan 28 15:17:43 crc kubenswrapper[4871]: I0128 15:17:43.036924 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:43 crc kubenswrapper[4871]: I0128 15:17:43.038353 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:43 crc kubenswrapper[4871]: I0128 15:17:43.038470 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:43 crc kubenswrapper[4871]: I0128 15:17:43.038551 4871 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:43 crc kubenswrapper[4871]: I0128 15:17:43.039108 4871 scope.go:117] "RemoveContainer" containerID="cd62c3a8dcc49414acdbe9951aa7893f4f0be579f9496397ab2cd417ce2e5b6a" Jan 28 15:17:43 crc kubenswrapper[4871]: W0128 15:17:43.045627 4871 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 28 15:17:43 crc kubenswrapper[4871]: I0128 15:17:43.045781 4871 trace.go:236] Trace[1733312887]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 15:17:33.043) (total time: 10002ms): Jan 28 15:17:43 crc kubenswrapper[4871]: Trace[1733312887]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:17:43.045) Jan 28 15:17:43 crc kubenswrapper[4871]: Trace[1733312887]: [10.002113062s] [10.002113062s] END Jan 28 15:17:43 crc kubenswrapper[4871]: E0128 15:17:43.045915 4871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 28 15:17:43 crc kubenswrapper[4871]: W0128 15:17:43.115783 4871 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 28 15:17:43 crc kubenswrapper[4871]: I0128 15:17:43.115908 4871 trace.go:236] Trace[1855005609]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 15:17:33.114) (total 
time: 10001ms): Jan 28 15:17:43 crc kubenswrapper[4871]: Trace[1855005609]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (15:17:43.115) Jan 28 15:17:43 crc kubenswrapper[4871]: Trace[1855005609]: [10.00108876s] [10.00108876s] END Jan 28 15:17:43 crc kubenswrapper[4871]: E0128 15:17:43.115936 4871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 28 15:17:43 crc kubenswrapper[4871]: I0128 15:17:43.253849 4871 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 28 15:17:43 crc kubenswrapper[4871]: I0128 15:17:43.253923 4871 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 28 15:17:43 crc kubenswrapper[4871]: I0128 15:17:43.259687 4871 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 28 15:17:43 crc 
kubenswrapper[4871]: I0128 15:17:43.259736 4871 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 28 15:17:43 crc kubenswrapper[4871]: I0128 15:17:43.837355 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:32:24.468116907 +0000 UTC Jan 28 15:17:44 crc kubenswrapper[4871]: I0128 15:17:44.042021 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 15:17:44 crc kubenswrapper[4871]: I0128 15:17:44.044314 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb"} Jan 28 15:17:44 crc kubenswrapper[4871]: I0128 15:17:44.044568 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:44 crc kubenswrapper[4871]: I0128 15:17:44.046129 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:44 crc kubenswrapper[4871]: I0128 15:17:44.046181 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:44 crc kubenswrapper[4871]: I0128 15:17:44.046198 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:44 crc kubenswrapper[4871]: I0128 15:17:44.838630 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 
21:09:12.487709806 +0000 UTC Jan 28 15:17:45 crc kubenswrapper[4871]: I0128 15:17:45.839168 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:38:42.737993627 +0000 UTC Jan 28 15:17:45 crc kubenswrapper[4871]: I0128 15:17:45.860685 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:17:45 crc kubenswrapper[4871]: I0128 15:17:45.860960 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:45 crc kubenswrapper[4871]: I0128 15:17:45.861071 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:17:45 crc kubenswrapper[4871]: I0128 15:17:45.862710 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:45 crc kubenswrapper[4871]: I0128 15:17:45.862777 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:45 crc kubenswrapper[4871]: I0128 15:17:45.862834 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:45 crc kubenswrapper[4871]: I0128 15:17:45.869458 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:17:46 crc kubenswrapper[4871]: I0128 15:17:46.053456 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:46 crc kubenswrapper[4871]: I0128 15:17:46.055055 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:46 crc kubenswrapper[4871]: I0128 15:17:46.055114 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:17:46 crc kubenswrapper[4871]: I0128 15:17:46.055135 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:46 crc kubenswrapper[4871]: I0128 15:17:46.526184 4871 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 15:17:46 crc kubenswrapper[4871]: I0128 15:17:46.840294 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:20:56.603135663 +0000 UTC Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.056758 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.058139 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.058209 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.058229 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.269237 4871 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.384488 4871 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.809700 4871 apiserver.go:52] "Watching apiserver" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.815992 4871 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.816381 4871 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.816941 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.817024 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.816965 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.817215 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:17:47 crc kubenswrapper[4871]: E0128 15:17:47.817297 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.817797 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.817810 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:17:47 crc kubenswrapper[4871]: E0128 15:17:47.817890 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:17:47 crc kubenswrapper[4871]: E0128 15:17:47.818018 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.820773 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.821248 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.821377 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.821547 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.821642 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.821720 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.821922 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.822021 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.827528 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.828522 4871 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 15:17:47 crc kubenswrapper[4871]: 
I0128 15:17:47.840571 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 11:41:46.480751712 +0000 UTC Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.849134 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.861369 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.873780 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.888900 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.903391 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.905614 4871 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.916769 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:47 crc kubenswrapper[4871]: I0128 15:17:47.927958 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.239304 4871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.243284 4871 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.254212 4871 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra 
config cache not synchronized" node="crc" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.299480 4871 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.319048 4871 csr.go:261] certificate signing request csr-pq6df is approved, waiting to be issued Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.326994 4871 csr.go:257] certificate signing request csr-pq6df is issued Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.343645 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.343686 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.343708 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.343725 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.343743 4871 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.343762 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.343781 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.343796 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.343813 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.343828 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.343843 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.343879 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.343894 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.343910 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.343928 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.343943 4871 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.343974 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.343993 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344008 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344023 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344038 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" 
(UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344058 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344075 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344090 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344119 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344148 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344164 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344140 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344182 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344208 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344231 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344247 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344262 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344280 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344296 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344336 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344372 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 15:17:48 crc 
kubenswrapper[4871]: I0128 15:17:48.344389 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344404 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344401 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344421 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344440 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344459 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344474 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344493 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344508 4871 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344524 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344539 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344555 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344570 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344603 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344625 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344642 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344658 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344673 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344694 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344709 4871 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344724 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344758 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344762 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344773 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344789 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344760 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344834 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344850 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344866 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344882 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344900 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344919 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344937 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344954 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344970 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344970 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344985 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.344974 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345051 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345072 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345092 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:17:48 
crc kubenswrapper[4871]: I0128 15:17:48.345104 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345111 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345127 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345143 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345162 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345179 4871 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345187 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345196 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345213 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345228 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345241 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" 
(OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345245 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345276 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345295 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345313 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345331 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345348 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345363 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345378 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345381 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345395 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345410 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345428 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345444 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345461 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345476 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345493 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345508 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345554 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345569 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345609 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345630 4871 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345648 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345641 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345665 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345721 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345732 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345736 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345811 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345821 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345859 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345916 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345939 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345945 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345960 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345986 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.345992 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.346012 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.346038 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.346059 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.346078 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.346097 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.346115 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.346133 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.346132 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.346328 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.346338 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.346434 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.346641 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.346674 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.346726 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.346851 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.346944 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.346964 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.347072 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.347131 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.347181 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.347182 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.347211 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.347350 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.347356 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.347396 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.347622 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.347661 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.347807 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.347820 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.347848 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.347970 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.347998 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.348040 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.348106 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.348154 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.348230 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.348339 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.348376 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.348383 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.348542 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.348551 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.348557 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.346137 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.348611 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.348714 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.348789 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.349010 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.349528 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.350514 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.350549 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.350572 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 
15:17:48.350614 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.350642 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.350666 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.350686 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.350708 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.350730 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.350757 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.350781 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.350808 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.350832 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.350854 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 
15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.350865 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.350885 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.350909 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.350934 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.350959 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.350980 4871 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.351003 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.351027 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.351052 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.351074 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.351099 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.351120 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.351143 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.351164 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.351189 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.351213 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.352198 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.352231 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.352250 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.352278 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.352296 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.352315 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.352348 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.352365 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.352410 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.352437 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.352472 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.352500 4871 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.352526 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.352551 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.352692 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.353337 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.353634 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.353650 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.353741 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.353774 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.353803 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.353831 
4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.354191 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.353833 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.354416 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.354447 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.354628 4871 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.354626 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.354810 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.354943 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.354984 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355008 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355028 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355051 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355073 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355095 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355118 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355139 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355161 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355180 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355197 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355215 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355232 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355252 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355271 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355288 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355307 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355323 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355344 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355515 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355511 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). 
InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355687 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355727 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355760 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355787 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355813 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:17:48 
crc kubenswrapper[4871]: I0128 15:17:48.355835 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355860 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355884 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355939 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355968 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.355999 4871 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356010 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356029 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356058 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356083 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356102 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356124 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356128 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356141 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356338 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356376 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356401 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356425 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356454 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356520 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356617 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356582 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356694 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356708 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356722 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356735 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356749 4871 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356760 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 
crc kubenswrapper[4871]: I0128 15:17:48.356773 4871 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356784 4871 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356796 4871 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356808 4871 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356818 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356824 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356863 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356881 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356895 4871 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356909 4871 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356922 4871 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356936 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 
15:17:48.356948 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356959 4871 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356969 4871 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356978 4871 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.356989 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.357001 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.357011 4871 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.357022 4871 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.357032 4871 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.357041 4871 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.357052 4871 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.357061 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.357071 4871 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.357081 4871 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.357090 4871 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc 
kubenswrapper[4871]: I0128 15:17:48.357100 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.357110 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.357120 4871 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.357438 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.357569 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.357629 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.357702 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.357781 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.358015 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:17:48.857987268 +0000 UTC m=+20.753825580 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.357132 4871 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.358047 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.358068 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.358122 4871 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.358144 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.358159 4871 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.358166 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.358175 4871 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.358200 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.358224 4871 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.358355 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.358372 4871 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.360889 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.360944 4871 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" 
(UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.360963 4871 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.360980 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.361022 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.361041 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.361056 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.364977 4871 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.365014 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.358483 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.358515 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.358390 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.360479 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.364525 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.364884 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.365132 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.365182 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.365477 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.365494 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.366477 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.367625 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.370439 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.371052 4871 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.371231 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.372378 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.372879 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.373468 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.373647 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.374228 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.374580 4871 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.374695 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:17:48.874675072 +0000 UTC m=+20.770513394 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.374763 4871 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.374803 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:17:48.874794086 +0000 UTC m=+20.770632408 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.374657 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.374780 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.374979 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.375018 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.375332 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.375354 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.375500 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.374836 4871 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.377077 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.377351 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.378170 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.378295 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.380161 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.380313 4871 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.380920 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.381051 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:17:48.880726298 +0000 UTC m=+20.776564620 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.378382 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.378990 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.379285 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.381200 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.381263 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.380177 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.376329 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.381910 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.382024 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.383429 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.383558 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.382739 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.383261 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.384703 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.384763 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.384784 4871 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.384918 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.385135 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.385531 4871 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.391381 4871 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.391409 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.391553 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.391570 4871 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.391602 4871 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.388756 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.385120 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.385945 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.385958 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.385993 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.386147 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.391676 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.386327 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.386378 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.386792 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.387047 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.387058 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.387625 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.389331 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.389721 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.389925 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.390251 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.390363 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.390455 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.390554 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.390786 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.390810 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.390876 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.390884 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.390930 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.391896 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.391273 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.387143 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.386421 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.391340 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.391476 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.392010 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.392087 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.392023 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.392161 4871 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.392220 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.392248 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:17:48.892223943 +0000 UTC m=+20.788062275 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.392725 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.392800 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.394575 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.395495 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.395759 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.396352 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.396482 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.396937 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.396971 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.397253 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.398655 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.399648 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.402260 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.402324 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.403015 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.403305 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.403343 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.403364 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.403325 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.404631 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.404788 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.404834 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.404628 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.405108 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.405244 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.405746 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.405778 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.405957 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.406012 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.406336 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.406347 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.406581 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.406578 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.406724 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.407113 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.407307 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.408216 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.408416 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.408438 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.408517 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.408575 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.408807 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.408877 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.409088 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.411431 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.427898 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.428919 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.432509 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-q868d"] Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.433127 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-q868d" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.436261 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.436282 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.436527 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.436835 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.443410 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.444449 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.455686 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.472836 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.493680 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:17:48 
crc kubenswrapper[4871]: I0128 15:17:48.494262 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494020 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494358 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494374 4871 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494384 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494394 4871 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494399 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494441 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494454 4871 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494464 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494475 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494485 4871 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494495 4871 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494508 4871 reconciler_common.go:293] "Volume detached for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494518 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494528 4871 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494540 4871 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494550 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494561 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494571 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494582 4871 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494608 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494619 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494630 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494640 4871 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494649 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494658 4871 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494683 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on 
node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494696 4871 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494706 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494716 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494725 4871 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494734 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494743 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494753 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494762 4871 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494772 4871 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494782 4871 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494791 4871 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494801 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494810 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494819 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494828 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node 
\"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494837 4871 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494848 4871 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494858 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494871 4871 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494880 4871 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494889 4871 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494898 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494907 4871 reconciler_common.go:293] 
"Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494917 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494926 4871 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494936 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494945 4871 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494954 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494963 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494972 4871 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494982 4871 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.494991 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495000 4871 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495011 4871 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495020 4871 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495037 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495049 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath 
\"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495061 4871 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495073 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495082 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495091 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495100 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495109 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495118 4871 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495127 4871 reconciler_common.go:293] "Volume 
detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495136 4871 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495146 4871 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495158 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495167 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495177 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495202 4871 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495212 4871 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495221 4871 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495230 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495239 4871 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495249 4871 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495258 4871 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495267 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495276 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495285 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495294 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495309 4871 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495323 4871 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495335 4871 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495350 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495359 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495368 4871 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495377 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495387 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495397 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495407 4871 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495415 4871 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495424 4871 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath 
\"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495435 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495444 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495453 4871 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495462 4871 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495472 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495483 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495494 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 
15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495504 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495514 4871 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495536 4871 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495546 4871 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495556 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495568 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495578 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495617 4871 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495627 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495638 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495647 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495656 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495673 4871 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495683 4871 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495694 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on 
node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495703 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495712 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495722 4871 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495731 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.495741 4871 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.496536 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.511661 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.530963 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.542908 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.560156 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.596178 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ac7f18c5-9c7c-483b-a476-470975bb1674-hosts-file\") pod \"node-resolver-q868d\" (UID: \"ac7f18c5-9c7c-483b-a476-470975bb1674\") " pod="openshift-dns/node-resolver-q868d" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.596241 4871 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drx5d\" (UniqueName: \"kubernetes.io/projected/ac7f18c5-9c7c-483b-a476-470975bb1674-kube-api-access-drx5d\") pod \"node-resolver-q868d\" (UID: \"ac7f18c5-9c7c-483b-a476-470975bb1674\") " pod="openshift-dns/node-resolver-q868d" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.666759 4871 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 28 15:17:48 crc kubenswrapper[4871]: W0128 15:17:48.667464 4871 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:17:48 crc kubenswrapper[4871]: W0128 15:17:48.667508 4871 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:17:48 crc kubenswrapper[4871]: W0128 15:17:48.667537 4871 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:17:48 crc kubenswrapper[4871]: W0128 15:17:48.667559 4871 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:17:48 crc kubenswrapper[4871]: W0128 15:17:48.667759 4871 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap 
ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:17:48 crc kubenswrapper[4871]: W0128 15:17:48.667794 4871 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:17:48 crc kubenswrapper[4871]: W0128 15:17:48.667820 4871 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:17:48 crc kubenswrapper[4871]: W0128 15:17:48.667988 4871 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:17:48 crc kubenswrapper[4871]: W0128 15:17:48.668020 4871 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:17:48 crc kubenswrapper[4871]: W0128 15:17:48.668045 4871 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:17:48 crc kubenswrapper[4871]: W0128 15:17:48.668064 4871 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of 
*v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:17:48 crc kubenswrapper[4871]: W0128 15:17:48.668230 4871 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.668275 4871 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd/events\": read tcp 38.102.83.199:33896->38.102.83.199:6443: use of closed network connection" event="&Event{ObjectMeta:{etcd-crc.188eee0f71c3e3de openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:17:33.703767006 +0000 UTC m=+5.599605338,LastTimestamp:2026-01-28 15:17:33.703767006 +0000 UTC m=+5.599605338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:17:48 crc kubenswrapper[4871]: W0128 15:17:48.668410 4871 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.697817 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-drx5d\" (UniqueName: \"kubernetes.io/projected/ac7f18c5-9c7c-483b-a476-470975bb1674-kube-api-access-drx5d\") pod \"node-resolver-q868d\" (UID: \"ac7f18c5-9c7c-483b-a476-470975bb1674\") " pod="openshift-dns/node-resolver-q868d" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.697886 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ac7f18c5-9c7c-483b-a476-470975bb1674-hosts-file\") pod \"node-resolver-q868d\" (UID: \"ac7f18c5-9c7c-483b-a476-470975bb1674\") " pod="openshift-dns/node-resolver-q868d" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.698339 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ac7f18c5-9c7c-483b-a476-470975bb1674-hosts-file\") pod \"node-resolver-q868d\" (UID: \"ac7f18c5-9c7c-483b-a476-470975bb1674\") " pod="openshift-dns/node-resolver-q868d" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.716122 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drx5d\" (UniqueName: \"kubernetes.io/projected/ac7f18c5-9c7c-483b-a476-470975bb1674-kube-api-access-drx5d\") pod \"node-resolver-q868d\" (UID: \"ac7f18c5-9c7c-483b-a476-470975bb1674\") " pod="openshift-dns/node-resolver-q868d" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.737027 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.745127 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-q868d" Jan 28 15:17:48 crc kubenswrapper[4871]: W0128 15:17:48.748631 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-a92dfe3202268c8bc20493824dc60431c5156c8a3c2a96dd376297b27912ff0e WatchSource:0}: Error finding container a92dfe3202268c8bc20493824dc60431c5156c8a3c2a96dd376297b27912ff0e: Status 404 returned error can't find the container with id a92dfe3202268c8bc20493824dc60431c5156c8a3c2a96dd376297b27912ff0e Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.752600 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:17:48 crc kubenswrapper[4871]: W0128 15:17:48.757426 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac7f18c5_9c7c_483b_a476_470975bb1674.slice/crio-2332c76859e99a2fd41aea862c898ad836ec76dbb1fe946e3d86b55e0de0e7de WatchSource:0}: Error finding container 2332c76859e99a2fd41aea862c898ad836ec76dbb1fe946e3d86b55e0de0e7de: Status 404 returned error can't find the container with id 2332c76859e99a2fd41aea862c898ad836ec76dbb1fe946e3d86b55e0de0e7de Jan 28 15:17:48 crc kubenswrapper[4871]: W0128 15:17:48.767069 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-b1205746c4340de6bc5405cb9e4911b53523d27f142291849dd8ae73442f9b85 WatchSource:0}: Error finding container b1205746c4340de6bc5405cb9e4911b53523d27f142291849dd8ae73442f9b85: Status 404 returned error can't find the container with id b1205746c4340de6bc5405cb9e4911b53523d27f142291849dd8ae73442f9b85 Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.841556 4871 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 08:03:10.58687901 +0000 UTC Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.899880 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.899969 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.899997 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.900020 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.900048 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.900167 4871 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.900234 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:17:49.900214878 +0000 UTC m=+21.796053200 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.900243 4871 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.900278 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.900303 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.900316 4871 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.900372 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:17:49.900337641 +0000 UTC m=+21.796176003 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.900402 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:17:49.900388233 +0000 UTC m=+21.796226585 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.900401 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.900437 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.900458 4871 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.900532 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:17:49.900477126 +0000 UTC m=+21.796315458 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.900626 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:17:49.90061056 +0000 UTC m=+21.796449072 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.903790 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:17:48 crc kubenswrapper[4871]: E0128 15:17:48.903924 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.910048 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.912027 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.913696 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.915961 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.917229 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.917448 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.919793 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.921779 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.926346 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.929299 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.930919 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.932341 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.932550 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.935789 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.937896 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.939860 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.940838 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.941656 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.942996 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.943550 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.953401 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.954180 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.955485 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.956311 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" 
path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.956970 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.958364 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.958983 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.960536 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.961452 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.966419 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.966480 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.967433 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.968817 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.969317 4871 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.969427 4871 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.972496 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.973020 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.974553 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.977093 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.978036 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.979229 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.979968 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.981628 4871 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.982157 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.983101 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.984294 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.986301 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.989406 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.990340 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.990910 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.993172 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 28 15:17:48 crc kubenswrapper[4871]: I0128 15:17:48.995773 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.016951 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.018080 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.018816 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.020243 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.020973 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.021068 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.027014 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.028190 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.043070 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.052942 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.059821 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.065488 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-q868d" event={"ID":"ac7f18c5-9c7c-483b-a476-470975bb1674","Type":"ContainerStarted","Data":"30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7"} Jan 28 
15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.065574 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-q868d" event={"ID":"ac7f18c5-9c7c-483b-a476-470975bb1674","Type":"ContainerStarted","Data":"2332c76859e99a2fd41aea862c898ad836ec76dbb1fe946e3d86b55e0de0e7de"} Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.069350 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9"} Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.069623 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23"} Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.069644 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7e2b5d91772ec7b9fbe319f7553b9d891f09b812741890f6dce137ca66871ce8"} Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.071142 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b1205746c4340de6bc5405cb9e4911b53523d27f142291849dd8ae73442f9b85"} Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.072329 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613"} Jan 28 15:17:49 crc 
kubenswrapper[4871]: I0128 15:17:49.072356 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a92dfe3202268c8bc20493824dc60431c5156c8a3c2a96dd376297b27912ff0e"} Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.082822 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.107939 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.120553 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.122856 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.138951 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.148757 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.161322 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.172678 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.184326 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.195318 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.205255 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.221169 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.235365 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:49Z is after 2025-08-24T17:21:41Z" Jan 28 
15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.248957 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.262500 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.286868 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.327854 4871 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-28 15:12:48 +0000 UTC, rotation deadline is 2026-10-13 11:01:34.050488302 +0000 UTC Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.327931 4871 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6187h43m44.722560342s for next certificate rotation Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.512411 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.646358 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.715319 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.797207 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.842276 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2026-01-03 03:29:38.944586906 +0000 UTC Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.842376 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.848142 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.862678 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-7tkqm"] Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.863232 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.868786 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.869877 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.870165 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.870629 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.870844 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.895251 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.903259 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.903309 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:17:49 crc kubenswrapper[4871]: E0128 15:17:49.903409 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:17:49 crc kubenswrapper[4871]: E0128 15:17:49.903596 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.907883 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.907994 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.908033 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:17:49 crc kubenswrapper[4871]: E0128 15:17:49.908063 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:17:51.908032727 +0000 UTC m=+23.803871039 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.908104 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.908165 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:17:49 crc kubenswrapper[4871]: E0128 15:17:49.908187 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:17:49 crc kubenswrapper[4871]: E0128 15:17:49.908210 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:17:49 crc kubenswrapper[4871]: E0128 15:17:49.908223 4871 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:49 crc kubenswrapper[4871]: E0128 15:17:49.908280 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:17:51.908262134 +0000 UTC m=+23.804100456 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:49 crc kubenswrapper[4871]: E0128 15:17:49.908287 4871 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:17:49 crc kubenswrapper[4871]: E0128 15:17:49.908329 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:17:51.908319436 +0000 UTC m=+23.804157758 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:17:49 crc kubenswrapper[4871]: E0128 15:17:49.908349 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:17:49 crc kubenswrapper[4871]: E0128 15:17:49.908363 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:17:49 crc kubenswrapper[4871]: E0128 15:17:49.908374 4871 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:49 crc kubenswrapper[4871]: E0128 15:17:49.908402 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:17:51.908393678 +0000 UTC m=+23.804232230 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:49 crc kubenswrapper[4871]: E0128 15:17:49.908428 4871 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:17:49 crc kubenswrapper[4871]: E0128 15:17:49.908479 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:17:51.908471061 +0000 UTC m=+23.804309383 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.912044 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.916573 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.925289 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-28T15:17:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.931027 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.939925 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.954711 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.975058 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:49 crc kubenswrapper[4871]: I0128 15:17:49.987375 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.001140 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.009190 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f-mcd-auth-proxy-config\") pod \"machine-config-daemon-7tkqm\" (UID: \"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\") " pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.009230 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f-rootfs\") pod \"machine-config-daemon-7tkqm\" (UID: \"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\") " pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.009263 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f-proxy-tls\") pod \"machine-config-daemon-7tkqm\" (UID: \"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\") " pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.009298 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9bdc\" (UniqueName: \"kubernetes.io/projected/25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f-kube-api-access-p9bdc\") pod \"machine-config-daemon-7tkqm\" (UID: \"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\") " pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.012708 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.023909 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.038399 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.039308 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.056929 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.058529 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.070366 4871 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-q868d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.076808 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.077182 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.078400 4871 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb" exitCode=255 Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.078989 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb"} Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.079030 4871 scope.go:117] "RemoveContainer" containerID="cd62c3a8dcc49414acdbe9951aa7893f4f0be579f9496397ab2cd417ce2e5b6a" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.092019 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.093270 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.093525 4871 scope.go:117] "RemoveContainer" containerID="e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb" Jan 28 15:17:50 crc kubenswrapper[4871]: E0128 15:17:50.093759 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.105696 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.108028 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.110012 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f-mcd-auth-proxy-config\") pod \"machine-config-daemon-7tkqm\" (UID: \"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\") " pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.110045 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f-rootfs\") pod \"machine-config-daemon-7tkqm\" (UID: \"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\") " pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.110079 4871 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f-proxy-tls\") pod \"machine-config-daemon-7tkqm\" (UID: \"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\") " pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.110105 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9bdc\" (UniqueName: \"kubernetes.io/projected/25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f-kube-api-access-p9bdc\") pod \"machine-config-daemon-7tkqm\" (UID: \"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\") " pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.110908 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f-mcd-auth-proxy-config\") pod \"machine-config-daemon-7tkqm\" (UID: \"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\") " pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.110956 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f-rootfs\") pod \"machine-config-daemon-7tkqm\" (UID: \"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\") " pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.114111 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f-proxy-tls\") pod \"machine-config-daemon-7tkqm\" (UID: \"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\") " pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.121389 4871 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.124220 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.133953 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9bdc\" (UniqueName: \"kubernetes.io/projected/25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f-kube-api-access-p9bdc\") pod \"machine-config-daemon-7tkqm\" (UID: \"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\") " pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.138410 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.152060 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.152492 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.166506 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.178441 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.190213 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: W0128 15:17:50.190418 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25fcadbe_ed48_4ec0_a8e2_5e2d5f0eed0f.slice/crio-6cc4142182f809ed7e088ac883bd2030820d5e1b514bef1b52726504f78c4f07 WatchSource:0}: Error finding container 6cc4142182f809ed7e088ac883bd2030820d5e1b514bef1b52726504f78c4f07: Status 404 returned error can't find the container with id 6cc4142182f809ed7e088ac883bd2030820d5e1b514bef1b52726504f78c4f07 Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.207400 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.215420 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.221719 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.235143 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.253315 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.280925 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.294169 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd62c3a8dcc49414acdbe9951aa7893f4f0be579f9496397ab2cd417ce2e5b6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:42Z\\\",\\\"message\\\":\\\"W0128 15:17:32.141043 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 15:17:32.141391 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769613452 cert, and key in /tmp/serving-cert-25899175/serving-signer.crt, /tmp/serving-cert-25899175/serving-signer.key\\\\nI0128 15:17:32.345312 1 observer_polling.go:159] Starting file observer\\\\nW0128 15:17:32.348201 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:17:32.348446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:17:32.349507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-25899175/tls.crt::/tmp/serving-cert-25899175/tls.key\\\\\\\"\\\\nF0128 15:17:42.925810 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.304681 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fn5bb"] Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.305837 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rz9nh"] Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.306046 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.307196 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-45mlg"] Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.307308 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.307404 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.311896 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.312049 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.313484 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.313706 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.313810 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.313867 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.314124 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.314294 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.314399 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.314465 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.314634 4871 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.314718 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.314832 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.317833 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.319861 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.337009 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.351456 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.372697 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.385071 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.398692 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.413453 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.413636 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-host-var-lib-cni-bin\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.413680 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-slash\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.413713 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1fe589be-c3d0-406c-9112-2fed7909283c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc 
kubenswrapper[4871]: I0128 15:17:50.413741 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-host-run-k8s-cni-cncf-io\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.413796 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-hostroot\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.413873 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-cni-bin\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.413901 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/178343c8-b657-4440-953e-6daef3609145-env-overrides\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.413925 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/178343c8-b657-4440-953e-6daef3609145-ovnkube-script-lib\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc 
kubenswrapper[4871]: I0128 15:17:50.414006 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1fe589be-c3d0-406c-9112-2fed7909283c-cnibin\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414040 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1fe589be-c3d0-406c-9112-2fed7909283c-os-release\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414060 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-host-var-lib-kubelet\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414089 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-log-socket\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414110 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fn5bb\" (UID: 
\"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414165 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d1955ba7-b91c-41de-97b7-188922cc0907-multus-daemon-config\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414204 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-host-var-lib-cni-multus\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414230 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmjgv\" (UniqueName: \"kubernetes.io/projected/d1955ba7-b91c-41de-97b7-188922cc0907-kube-api-access-mmjgv\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414272 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngvnx\" (UniqueName: \"kubernetes.io/projected/1fe589be-c3d0-406c-9112-2fed7909283c-kube-api-access-ngvnx\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414304 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-system-cni-dir\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414327 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d1955ba7-b91c-41de-97b7-188922cc0907-cni-binary-copy\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414350 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-var-lib-openvswitch\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414389 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-cni-netd\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414412 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtss7\" (UniqueName: \"kubernetes.io/projected/178343c8-b657-4440-953e-6daef3609145-kube-api-access-rtss7\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414459 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/1fe589be-c3d0-406c-9112-2fed7909283c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414480 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/178343c8-b657-4440-953e-6daef3609145-ovnkube-config\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414512 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/178343c8-b657-4440-953e-6daef3609145-ovn-node-metrics-cert\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414538 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-multus-cni-dir\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414559 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-multus-conf-dir\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414613 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-etc-kubernetes\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414637 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-run-systemd\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414660 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-host-run-netns\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414684 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-systemd-units\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414743 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-etc-openvswitch\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414792 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-kubelet\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414822 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-run-openvswitch\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414857 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-node-log\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414897 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-multus-socket-dir-parent\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.414954 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-cnibin\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.415006 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-os-release\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.415032 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-run-netns\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.415067 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-run-ovn\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.415123 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1fe589be-c3d0-406c-9112-2fed7909283c-system-cni-dir\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.415151 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1fe589be-c3d0-406c-9112-2fed7909283c-cni-binary-copy\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.415173 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-host-run-multus-certs\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.415198 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-run-ovn-kubernetes\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.429547 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.444119 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd62c3a8dcc49414acdbe9951aa7893f4f0be579f9496397ab2cd417ce2e5b6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:42Z\\\",\\\"message\\\":\\\"W0128 15:17:32.141043 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 15:17:32.141391 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769613452 cert, and key in /tmp/serving-cert-25899175/serving-signer.crt, /tmp/serving-cert-25899175/serving-signer.key\\\\nI0128 15:17:32.345312 1 observer_polling.go:159] Starting file observer\\\\nW0128 15:17:32.348201 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:17:32.348446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:17:32.349507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-25899175/tls.crt::/tmp/serving-cert-25899175/tls.key\\\\\\\"\\\\nF0128 15:17:42.925810 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.457400 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.467806 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.484901 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.499024 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.512435 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.515668 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/178343c8-b657-4440-953e-6daef3609145-ovn-node-metrics-cert\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.515781 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-run-systemd\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.515862 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-multus-cni-dir\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.515941 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-multus-conf-dir\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.515877 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-run-systemd\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516074 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-multus-conf-dir\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516120 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-multus-cni-dir\") pod \"multus-45mlg\" (UID: 
\"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516189 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-etc-kubernetes\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516085 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-etc-kubernetes\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516323 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-host-run-netns\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516349 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-systemd-units\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516405 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-systemd-units\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516471 4871 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-etc-openvswitch\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516494 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-multus-socket-dir-parent\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516535 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-etc-openvswitch\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516558 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-multus-socket-dir-parent\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516509 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-kubelet\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516606 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-kubelet\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516663 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-run-openvswitch\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516693 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-run-openvswitch\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516718 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-node-log\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516747 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-cnibin\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516866 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-host-run-netns\") pod \"multus-45mlg\" (UID: 
\"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516888 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-cnibin\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516815 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-os-release\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517095 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-os-release\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.516890 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-node-log\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517203 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-run-netns\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517280 4871 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-run-ovn\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517343 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-run-ovn-kubernetes\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517225 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-run-netns\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517414 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1fe589be-c3d0-406c-9112-2fed7909283c-system-cni-dir\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517450 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1fe589be-c3d0-406c-9112-2fed7909283c-cni-binary-copy\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517461 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-run-ovn\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517471 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-host-run-multus-certs\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517512 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-host-var-lib-cni-bin\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517539 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-slash\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517539 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-run-ovn-kubernetes\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517561 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/1fe589be-c3d0-406c-9112-2fed7909283c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517621 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-host-run-k8s-cni-cncf-io\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517642 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-hostroot\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517664 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-cni-bin\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517680 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/178343c8-b657-4440-953e-6daef3609145-env-overrides\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517694 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/178343c8-b657-4440-953e-6daef3609145-ovnkube-script-lib\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517712 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517734 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1fe589be-c3d0-406c-9112-2fed7909283c-cnibin\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517750 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1fe589be-c3d0-406c-9112-2fed7909283c-os-release\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517765 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-host-var-lib-kubelet\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517782 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-log-socket\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517802 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d1955ba7-b91c-41de-97b7-188922cc0907-multus-daemon-config\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517822 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-host-var-lib-cni-multus\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517839 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmjgv\" (UniqueName: \"kubernetes.io/projected/d1955ba7-b91c-41de-97b7-188922cc0907-kube-api-access-mmjgv\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517873 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngvnx\" (UniqueName: \"kubernetes.io/projected/1fe589be-c3d0-406c-9112-2fed7909283c-kube-api-access-ngvnx\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517901 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-system-cni-dir\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517916 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d1955ba7-b91c-41de-97b7-188922cc0907-cni-binary-copy\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517931 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-var-lib-openvswitch\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517952 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-cni-netd\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517968 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtss7\" (UniqueName: \"kubernetes.io/projected/178343c8-b657-4440-953e-6daef3609145-kube-api-access-rtss7\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.517999 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1fe589be-c3d0-406c-9112-2fed7909283c-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518000 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-slash\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518046 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1fe589be-c3d0-406c-9112-2fed7909283c-cni-binary-copy\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518064 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1fe589be-c3d0-406c-9112-2fed7909283c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518071 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-host-run-k8s-cni-cncf-io\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518112 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518115 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-hostroot\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518050 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-host-run-multus-certs\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518147 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-cni-bin\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518149 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-host-var-lib-cni-multus\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518513 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1fe589be-c3d0-406c-9112-2fed7909283c-system-cni-dir\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: 
\"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518570 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1fe589be-c3d0-406c-9112-2fed7909283c-cnibin\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518579 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-system-cni-dir\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518632 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1fe589be-c3d0-406c-9112-2fed7909283c-os-release\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518660 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-host-var-lib-kubelet\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518681 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-log-socket\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc 
kubenswrapper[4871]: I0128 15:17:50.518545 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d1955ba7-b91c-41de-97b7-188922cc0907-multus-daemon-config\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518798 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/178343c8-b657-4440-953e-6daef3609145-env-overrides\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518920 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-var-lib-openvswitch\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518951 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/178343c8-b657-4440-953e-6daef3609145-ovnkube-config\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518017 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/178343c8-b657-4440-953e-6daef3609145-ovnkube-config\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.518926 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-cni-netd\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.519116 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d1955ba7-b91c-41de-97b7-188922cc0907-cni-binary-copy\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.519270 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/178343c8-b657-4440-953e-6daef3609145-ovnkube-script-lib\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.519338 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d1955ba7-b91c-41de-97b7-188922cc0907-host-var-lib-cni-bin\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.519505 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1fe589be-c3d0-406c-9112-2fed7909283c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.520509 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/178343c8-b657-4440-953e-6daef3609145-ovn-node-metrics-cert\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.525572 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.535844 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngvnx\" (UniqueName: \"kubernetes.io/projected/1fe589be-c3d0-406c-9112-2fed7909283c-kube-api-access-ngvnx\") pod \"multus-additional-cni-plugins-rz9nh\" (UID: \"1fe589be-c3d0-406c-9112-2fed7909283c\") " pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.538855 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.539761 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmjgv\" (UniqueName: \"kubernetes.io/projected/d1955ba7-b91c-41de-97b7-188922cc0907-kube-api-access-mmjgv\") pod \"multus-45mlg\" (UID: \"d1955ba7-b91c-41de-97b7-188922cc0907\") " pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.541581 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtss7\" (UniqueName: \"kubernetes.io/projected/178343c8-b657-4440-953e-6daef3609145-kube-api-access-rtss7\") pod \"ovnkube-node-fn5bb\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.563650 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.579608 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.596374 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.642868 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.651668 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.672875 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-45mlg" Jan 28 15:17:50 crc kubenswrapper[4871]: W0128 15:17:50.701085 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1955ba7_b91c_41de_97b7_188922cc0907.slice/crio-2d60c45d7d48fa28569b36cc4f46ea065099b4f4b7752b0e89c27397fb32d132 WatchSource:0}: Error finding container 2d60c45d7d48fa28569b36cc4f46ea065099b4f4b7752b0e89c27397fb32d132: Status 404 returned error can't find the container with id 2d60c45d7d48fa28569b36cc4f46ea065099b4f4b7752b0e89c27397fb32d132 Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.843198 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:57:19.534806344 +0000 UTC Jan 28 15:17:50 crc kubenswrapper[4871]: I0128 15:17:50.903718 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:17:50 crc kubenswrapper[4871]: E0128 15:17:50.903835 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.083441 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-45mlg" event={"ID":"d1955ba7-b91c-41de-97b7-188922cc0907","Type":"ContainerStarted","Data":"7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb"} Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.083523 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-45mlg" event={"ID":"d1955ba7-b91c-41de-97b7-188922cc0907","Type":"ContainerStarted","Data":"2d60c45d7d48fa28569b36cc4f46ea065099b4f4b7752b0e89c27397fb32d132"} Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.085305 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" event={"ID":"1fe589be-c3d0-406c-9112-2fed7909283c","Type":"ContainerStarted","Data":"7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40"} Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.085520 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" event={"ID":"1fe589be-c3d0-406c-9112-2fed7909283c","Type":"ContainerStarted","Data":"c9592f739769c6c10b992085e5a561f3c5a19fa112053e62d688430e0267e2d0"} Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.086561 4871 generic.go:334] "Generic (PLEG): container finished" podID="178343c8-b657-4440-953e-6daef3609145" containerID="411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a" exitCode=0 Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.086694 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerDied","Data":"411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a"} Jan 28 15:17:51 crc 
kubenswrapper[4871]: I0128 15:17:51.086786 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerStarted","Data":"39fb03a8f95777e5196c225b31c659159448fed380393e101192906a41713bd5"} Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.089274 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerStarted","Data":"e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0"} Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.089322 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerStarted","Data":"bc39c6865d02dd3f85f379df66133847e143e74d6b083015d0e30402088a16ea"} Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.089336 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerStarted","Data":"6cc4142182f809ed7e088ac883bd2030820d5e1b514bef1b52726504f78c4f07"} Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.091788 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.095685 4871 scope.go:117] "RemoveContainer" containerID="e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb" Jan 28 15:17:51 crc kubenswrapper[4871]: E0128 15:17:51.095819 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.098007 4871 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.111666 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe0
2122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.125577 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.142250 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.155939 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.169948 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.183957 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.198904 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.215910 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd62c3a8dcc49414acdbe9951aa7893f4f0be579f9496397ab2cd417ce2e5b6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:42Z\\\",\\\"message\\\":\\\"W0128 15:17:32.141043 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 15:17:32.141391 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769613452 cert, and key in /tmp/serving-cert-25899175/serving-signer.crt, /tmp/serving-cert-25899175/serving-signer.key\\\\nI0128 15:17:32.345312 1 observer_polling.go:159] Starting file observer\\\\nW0128 15:17:32.348201 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:17:32.348446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:17:32.349507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-25899175/tls.crt::/tmp/serving-cert-25899175/tls.key\\\\\\\"\\\\nF0128 15:17:42.925810 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.236212 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.254703 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.273689 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.286626 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.296860 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.316349 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.331689 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 
15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.350133 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"
mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 
15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.368252 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to 
sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.380275 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.394326 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.416911 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.438354 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.457946 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.473575 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.488576 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.515127 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.529639 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.544122 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.558494 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.843982 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:31:34.162367169 +0000 UTC Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.902819 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.902819 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:17:51 crc kubenswrapper[4871]: E0128 15:17:51.902985 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:17:51 crc kubenswrapper[4871]: E0128 15:17:51.902921 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.936550 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.936659 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:17:51 crc kubenswrapper[4871]: E0128 15:17:51.936733 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:17:55.936706077 +0000 UTC m=+27.832544399 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:17:51 crc kubenswrapper[4871]: E0128 15:17:51.936746 4871 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.936807 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:17:51 crc kubenswrapper[4871]: E0128 15:17:51.936855 4871 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:17:51 crc kubenswrapper[4871]: E0128 15:17:51.936881 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:17:55.936840451 +0000 UTC m=+27.832678773 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.936975 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:17:51 crc kubenswrapper[4871]: I0128 15:17:51.937057 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:17:51 crc kubenswrapper[4871]: E0128 15:17:51.937283 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:17:55.937263574 +0000 UTC m=+27.833101916 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:17:51 crc kubenswrapper[4871]: E0128 15:17:51.937310 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:17:51 crc kubenswrapper[4871]: E0128 15:17:51.937326 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:17:51 crc kubenswrapper[4871]: E0128 15:17:51.937335 4871 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:51 crc kubenswrapper[4871]: E0128 15:17:51.937372 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:17:55.937360087 +0000 UTC m=+27.833198409 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:51 crc kubenswrapper[4871]: E0128 15:17:51.937423 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:17:51 crc kubenswrapper[4871]: E0128 15:17:51.937452 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:17:51 crc kubenswrapper[4871]: E0128 15:17:51.937475 4871 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:51 crc kubenswrapper[4871]: E0128 15:17:51.937534 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:17:55.937525872 +0000 UTC m=+27.833364194 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.100456 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759"} Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.102982 4871 generic.go:334] "Generic (PLEG): container finished" podID="1fe589be-c3d0-406c-9112-2fed7909283c" containerID="7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40" exitCode=0 Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.103387 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" event={"ID":"1fe589be-c3d0-406c-9112-2fed7909283c","Type":"ContainerDied","Data":"7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40"} Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.109911 4871 scope.go:117] "RemoveContainer" containerID="e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb" Jan 28 15:17:52 crc kubenswrapper[4871]: E0128 15:17:52.110066 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 15:17:52 crc kubenswrapper[4871]: 
I0128 15:17:52.110099 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerStarted","Data":"9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca"} Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.110128 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerStarted","Data":"aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655"} Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.110141 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerStarted","Data":"7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110"} Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.110153 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerStarted","Data":"b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f"} Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.114971 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.127722 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.140616 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.161752 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.179024 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.200545 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.215627 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.235349 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.272964 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.302431 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to 
sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.322282 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.334751 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.357253 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.374834 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.397479 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.412087 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.424552 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.434652 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.446881 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.457939 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.474800 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.490794 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStat
uses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.507130 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.525788 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.542097 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.558433 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.573998 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.595272 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.845241 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 13:44:24.045969955 +0000 UTC Jan 28 15:17:52 crc kubenswrapper[4871]: I0128 15:17:52.903362 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:17:52 crc kubenswrapper[4871]: E0128 15:17:52.903562 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.115620 4871 generic.go:334] "Generic (PLEG): container finished" podID="1fe589be-c3d0-406c-9112-2fed7909283c" containerID="426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741" exitCode=0 Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.115796 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" event={"ID":"1fe589be-c3d0-406c-9112-2fed7909283c","Type":"ContainerDied","Data":"426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741"} Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.120379 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerStarted","Data":"34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2"} Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.120477 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerStarted","Data":"b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f"} Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.136377 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.160537 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.173555 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.187780 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.202702 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.217681 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.242962 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.258619 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.280261 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.293215 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.306358 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.321630 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.337322 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.353147 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.711499 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-pqb64"] Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.711935 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-pqb64" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.713681 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.714268 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.714532 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.715303 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.730090 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.747503 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.763362 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStat
uses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.778285 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.793360 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.801691 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.812783 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.824963 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.833575 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.845994 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 03:31:38.806117368 +0000 UTC Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.850886 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.858315 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77475\" (UniqueName: \"kubernetes.io/projected/4db608d9-2e01-48ed-9a1c-ccedc49f414e-kube-api-access-77475\") pod \"node-ca-pqb64\" (UID: \"4db608d9-2e01-48ed-9a1c-ccedc49f414e\") " pod="openshift-image-registry/node-ca-pqb64" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.858355 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4db608d9-2e01-48ed-9a1c-ccedc49f414e-host\") pod \"node-ca-pqb64\" (UID: \"4db608d9-2e01-48ed-9a1c-ccedc49f414e\") " pod="openshift-image-registry/node-ca-pqb64" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.858384 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serviceca\" (UniqueName: \"kubernetes.io/configmap/4db608d9-2e01-48ed-9a1c-ccedc49f414e-serviceca\") pod \"node-ca-pqb64\" (UID: \"4db608d9-2e01-48ed-9a1c-ccedc49f414e\") " pod="openshift-image-registry/node-ca-pqb64" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.877709 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abb
ce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.891462 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.902573 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.902925 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.902958 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:17:53 crc kubenswrapper[4871]: E0128 15:17:53.903039 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:17:53 crc kubenswrapper[4871]: E0128 15:17:53.903147 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.912222 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.921812 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.959068 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77475\" (UniqueName: \"kubernetes.io/projected/4db608d9-2e01-48ed-9a1c-ccedc49f414e-kube-api-access-77475\") pod \"node-ca-pqb64\" (UID: \"4db608d9-2e01-48ed-9a1c-ccedc49f414e\") " pod="openshift-image-registry/node-ca-pqb64" Jan 28 15:17:53 crc kubenswrapper[4871]: 
I0128 15:17:53.959114 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4db608d9-2e01-48ed-9a1c-ccedc49f414e-host\") pod \"node-ca-pqb64\" (UID: \"4db608d9-2e01-48ed-9a1c-ccedc49f414e\") " pod="openshift-image-registry/node-ca-pqb64" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.959141 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4db608d9-2e01-48ed-9a1c-ccedc49f414e-serviceca\") pod \"node-ca-pqb64\" (UID: \"4db608d9-2e01-48ed-9a1c-ccedc49f414e\") " pod="openshift-image-registry/node-ca-pqb64" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.959185 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4db608d9-2e01-48ed-9a1c-ccedc49f414e-host\") pod \"node-ca-pqb64\" (UID: \"4db608d9-2e01-48ed-9a1c-ccedc49f414e\") " pod="openshift-image-registry/node-ca-pqb64" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.960137 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4db608d9-2e01-48ed-9a1c-ccedc49f414e-serviceca\") pod \"node-ca-pqb64\" (UID: \"4db608d9-2e01-48ed-9a1c-ccedc49f414e\") " pod="openshift-image-registry/node-ca-pqb64" Jan 28 15:17:53 crc kubenswrapper[4871]: I0128 15:17:53.975319 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77475\" (UniqueName: \"kubernetes.io/projected/4db608d9-2e01-48ed-9a1c-ccedc49f414e-kube-api-access-77475\") pod \"node-ca-pqb64\" (UID: \"4db608d9-2e01-48ed-9a1c-ccedc49f414e\") " pod="openshift-image-registry/node-ca-pqb64" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.031733 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-pqb64" Jan 28 15:17:54 crc kubenswrapper[4871]: W0128 15:17:54.042611 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4db608d9_2e01_48ed_9a1c_ccedc49f414e.slice/crio-a23a0a48650e8e542c312dd3344117170f653e686f1ca4b49e8a30eb29450fc7 WatchSource:0}: Error finding container a23a0a48650e8e542c312dd3344117170f653e686f1ca4b49e8a30eb29450fc7: Status 404 returned error can't find the container with id a23a0a48650e8e542c312dd3344117170f653e686f1ca4b49e8a30eb29450fc7 Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.125548 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pqb64" event={"ID":"4db608d9-2e01-48ed-9a1c-ccedc49f414e","Type":"ContainerStarted","Data":"a23a0a48650e8e542c312dd3344117170f653e686f1ca4b49e8a30eb29450fc7"} Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.128838 4871 generic.go:334] "Generic (PLEG): container finished" podID="1fe589be-c3d0-406c-9112-2fed7909283c" containerID="195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a" exitCode=0 Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.128911 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" event={"ID":"1fe589be-c3d0-406c-9112-2fed7909283c","Type":"ContainerDied","Data":"195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a"} Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.144940 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.159694 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.174397 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.194463 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 
15:17:54.206958 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.221533 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.232885 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.242008 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.261179 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.276612 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.291665 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.306125 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.337098 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.384169 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.416884 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.655052 4871 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.657199 4871 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.657237 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.657247 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.657355 4871 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.667848 4871 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.668292 4871 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.669912 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.669967 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.669981 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.670001 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.670015 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:54Z","lastTransitionTime":"2026-01-28T15:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:54 crc kubenswrapper[4871]: E0128 15:17:54.690100 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.695149 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.695197 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.695213 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.695243 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.695261 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:54Z","lastTransitionTime":"2026-01-28T15:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:54 crc kubenswrapper[4871]: E0128 15:17:54.714419 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.719430 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.719486 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.719499 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.719519 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.719534 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:54Z","lastTransitionTime":"2026-01-28T15:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:54 crc kubenswrapper[4871]: E0128 15:17:54.744714 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.749325 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.749372 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.749384 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.749404 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.749418 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:54Z","lastTransitionTime":"2026-01-28T15:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:54 crc kubenswrapper[4871]: E0128 15:17:54.765573 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.769075 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.769124 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.769143 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.769164 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.769175 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:54Z","lastTransitionTime":"2026-01-28T15:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:54 crc kubenswrapper[4871]: E0128 15:17:54.786832 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:54 crc kubenswrapper[4871]: E0128 15:17:54.786995 4871 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.789361 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.789451 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.789479 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.789518 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.789544 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:54Z","lastTransitionTime":"2026-01-28T15:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.846165 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:59:34.199399899 +0000 UTC Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.893316 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.893377 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.893402 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.893438 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.893463 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:54Z","lastTransitionTime":"2026-01-28T15:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.903290 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:17:54 crc kubenswrapper[4871]: E0128 15:17:54.903457 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.997568 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.997687 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.997718 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.997755 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:54 crc kubenswrapper[4871]: I0128 15:17:54.997783 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:54Z","lastTransitionTime":"2026-01-28T15:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.101241 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.101378 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.101398 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.101423 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.101441 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:55Z","lastTransitionTime":"2026-01-28T15:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.144630 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerStarted","Data":"4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039"} Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.146996 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pqb64" event={"ID":"4db608d9-2e01-48ed-9a1c-ccedc49f414e","Type":"ContainerStarted","Data":"d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f"} Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.151842 4871 generic.go:334] "Generic (PLEG): container finished" podID="1fe589be-c3d0-406c-9112-2fed7909283c" containerID="3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5" exitCode=0 Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.151906 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" event={"ID":"1fe589be-c3d0-406c-9112-2fed7909283c","Type":"ContainerDied","Data":"3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5"} Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.184641 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.204534 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.204625 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.204647 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.205015 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.205062 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:55Z","lastTransitionTime":"2026-01-28T15:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.205547 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b
6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.222832 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.239164 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.254441 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.266982 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.283328 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.299630 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStat
uses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.307445 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.307505 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.307525 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:55 crc 
kubenswrapper[4871]: I0128 15:17:55.307548 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.307614 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:55Z","lastTransitionTime":"2026-01-28T15:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.317177 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.333387 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.342781 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e9
6f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.354322 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.365485 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.378833 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.396974 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.409421 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.409524 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.409566 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.409575 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.409610 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.409621 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:55Z","lastTransitionTime":"2026-01-28T15:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.425497 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.440557 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.456211 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.485377 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.503520 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.512359 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.512402 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.512416 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.512438 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.512453 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:55Z","lastTransitionTime":"2026-01-28T15:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.520468 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.539302 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.562071 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.573452 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.588143 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.601792 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.616210 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.616257 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.616267 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 
15:17:55.616285 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.616297 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:55Z","lastTransitionTime":"2026-01-28T15:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.617105 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.629730 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.661810 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.718132 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.718194 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.718213 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.718241 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.718261 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:55Z","lastTransitionTime":"2026-01-28T15:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.820837 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.820906 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.820924 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.820952 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.820970 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:55Z","lastTransitionTime":"2026-01-28T15:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.846698 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 03:36:08.678265236 +0000 UTC Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.903106 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.903245 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:17:55 crc kubenswrapper[4871]: E0128 15:17:55.903373 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:17:55 crc kubenswrapper[4871]: E0128 15:17:55.903529 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.923771 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.923824 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.923841 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.923867 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.923885 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:55Z","lastTransitionTime":"2026-01-28T15:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.979772 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.979944 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.980004 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:17:55 crc kubenswrapper[4871]: E0128 15:17:55.980025 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 15:18:03.979995675 +0000 UTC m=+35.875834027 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.980069 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:17:55 crc kubenswrapper[4871]: I0128 15:17:55.980145 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:17:55 crc kubenswrapper[4871]: E0128 15:17:55.980251 4871 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:17:55 crc kubenswrapper[4871]: E0128 15:17:55.980355 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:17:55 crc kubenswrapper[4871]: E0128 15:17:55.980357 4871 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:17:55 crc kubenswrapper[4871]: E0128 15:17:55.980382 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:17:55 crc kubenswrapper[4871]: E0128 15:17:55.980398 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:17:55 crc kubenswrapper[4871]: E0128 15:17:55.980407 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:18:03.980370126 +0000 UTC m=+35.876208488 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:17:55 crc kubenswrapper[4871]: E0128 15:17:55.980402 4871 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:55 crc kubenswrapper[4871]: E0128 15:17:55.980421 4871 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:55 crc kubenswrapper[4871]: E0128 15:17:55.980217 4871 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:17:55 crc kubenswrapper[4871]: E0128 15:17:55.980479 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:18:03.980464569 +0000 UTC m=+35.876302931 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:55 crc kubenswrapper[4871]: E0128 15:17:55.980678 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:18:03.980640545 +0000 UTC m=+35.876478907 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:17:55 crc kubenswrapper[4871]: E0128 15:17:55.980782 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:18:03.980713217 +0000 UTC m=+35.876551579 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.026643 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.026710 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.026733 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.026759 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.026776 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:56Z","lastTransitionTime":"2026-01-28T15:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.130701 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.130763 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.130801 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.130825 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.130843 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:56Z","lastTransitionTime":"2026-01-28T15:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.162032 4871 generic.go:334] "Generic (PLEG): container finished" podID="1fe589be-c3d0-406c-9112-2fed7909283c" containerID="97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203" exitCode=0 Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.162530 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" event={"ID":"1fe589be-c3d0-406c-9112-2fed7909283c","Type":"ContainerDied","Data":"97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203"} Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.204970 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.224686 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.234368 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.234424 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.234458 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.234484 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.234503 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:56Z","lastTransitionTime":"2026-01-28T15:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.247685 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13
fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.264377 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:17:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.279999 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.299649 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.322581 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.337415 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:56 crc 
kubenswrapper[4871]: I0128 15:17:56.337478 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.337498 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.337531 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.337553 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:56Z","lastTransitionTime":"2026-01-28T15:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.339950 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.356120 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.372980 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.385059 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.400230 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.420825 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.435889 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.440639 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.440677 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.440690 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.440714 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.440728 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:56Z","lastTransitionTime":"2026-01-28T15:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.487846 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.528616 4871 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.542418 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.542467 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.542481 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.542496 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.542507 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:56Z","lastTransitionTime":"2026-01-28T15:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.645498 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.645550 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.645560 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.645579 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.645606 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:56Z","lastTransitionTime":"2026-01-28T15:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.750298 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.750357 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.750373 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.750394 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.750407 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:56Z","lastTransitionTime":"2026-01-28T15:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.847530 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 00:07:45.1681415 +0000 UTC Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.853859 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.853952 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.853972 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.854000 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.854019 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:56Z","lastTransitionTime":"2026-01-28T15:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.903378 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:17:56 crc kubenswrapper[4871]: E0128 15:17:56.903499 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.956720 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.956774 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.956790 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.956814 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:56 crc kubenswrapper[4871]: I0128 15:17:56.956829 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:56Z","lastTransitionTime":"2026-01-28T15:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.060122 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.060163 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.060172 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.060189 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.060201 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:57Z","lastTransitionTime":"2026-01-28T15:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.163946 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.164030 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.164044 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.164092 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.164106 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:57Z","lastTransitionTime":"2026-01-28T15:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.171639 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerStarted","Data":"8cc566abdc8b82bf93a1bb3f4cd6e3f6df3a4f07dfa695858f55f4cea34fbfbb"} Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.172024 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.175656 4871 generic.go:334] "Generic (PLEG): container finished" podID="1fe589be-c3d0-406c-9112-2fed7909283c" containerID="89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4" exitCode=0 Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.175704 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" event={"ID":"1fe589be-c3d0-406c-9112-2fed7909283c","Type":"ContainerDied","Data":"89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4"} Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.186242 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.205926 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.210714 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 
15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.229761 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to 
sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.246413 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.267582 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.267671 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.267683 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.267703 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.267716 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:57Z","lastTransitionTime":"2026-01-28T15:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.269179 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.282961 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.304068 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.331982 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.348215 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.372764 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.372863 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.372882 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.372909 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.372932 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:57Z","lastTransitionTime":"2026-01-28T15:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.376722 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cc566abdc8b82bf93a1bb3f4cd6e3f6df3a4f07dfa695858f55f4cea34fbfbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.402058 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.418712 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.434737 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.451516 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.464730 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.475388 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.475418 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.475426 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.475439 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.475448 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:57Z","lastTransitionTime":"2026-01-28T15:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.480477 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.492107 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.500715 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.517749 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cc566abdc8b82bf93a1bb3f4cd6e3f6df3a4f07dfa695858f55f4cea34fbfbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.536229 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.550261 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.562104 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.574150 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.577534 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.577618 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.577630 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.577648 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.577660 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:57Z","lastTransitionTime":"2026-01-28T15:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.588628 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.603800 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.617805 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.637291 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStat
uses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.660550 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.680040 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.680077 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.680088 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.680102 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.680112 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:57Z","lastTransitionTime":"2026-01-28T15:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.685165 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.701572 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.783519 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.783564 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.783576 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.783624 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.783639 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:57Z","lastTransitionTime":"2026-01-28T15:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.848821 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 20:17:12.299370011 +0000 UTC Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.887063 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.887117 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.887145 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.887173 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.887194 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:57Z","lastTransitionTime":"2026-01-28T15:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.903257 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.903273 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:17:57 crc kubenswrapper[4871]: E0128 15:17:57.903487 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:17:57 crc kubenswrapper[4871]: E0128 15:17:57.903647 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.990301 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.990368 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.990386 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.990410 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:57 crc kubenswrapper[4871]: I0128 15:17:57.990429 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:57Z","lastTransitionTime":"2026-01-28T15:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.093620 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.093683 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.093701 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.093727 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.093744 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:58Z","lastTransitionTime":"2026-01-28T15:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.128646 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.189838 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" event={"ID":"1fe589be-c3d0-406c-9112-2fed7909283c","Type":"ContainerStarted","Data":"d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0"} Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.190322 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.197150 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.197197 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.197213 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.197238 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.197282 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:58Z","lastTransitionTime":"2026-01-28T15:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.210758 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.223002 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.228375 4871 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.248106 4871 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01
-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85c
b4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.263168 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.277175 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.288962 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.299311 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.299364 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.299376 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.299393 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.299404 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:58Z","lastTransitionTime":"2026-01-28T15:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.301034 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.320529 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cc566abdc8b82bf93a1bb3f4cd6e3f6df3a4f07dfa695858f55f4cea34fbfbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.331789 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.343768 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.355424 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.366509 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.384838 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.399281 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.401852 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.401887 
4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.401900 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.401917 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.401927 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:58Z","lastTransitionTime":"2026-01-28T15:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.412883 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.423563 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.439009 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.453240 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStat
uses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.470385 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.484165 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.497103 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69
520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.503715 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.503742 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.503750 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.503763 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.503772 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:58Z","lastTransitionTime":"2026-01-28T15:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.511352 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.524981 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.534609 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.554214 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cc566abdc8b82bf93a1bb3f4cd6e3f6df3a4f07dfa695858f55f4cea34fbfbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.574879 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.589684 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.602109 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.607265 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.607432 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.607568 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.607698 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.607849 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:58Z","lastTransitionTime":"2026-01-28T15:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.633753 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.675198 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.710709 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.710754 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.710768 4871 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.710788 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.710802 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:58Z","lastTransitionTime":"2026-01-28T15:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.813552 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.813631 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.813643 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.813659 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.813670 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:58Z","lastTransitionTime":"2026-01-28T15:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.849960 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 01:19:43.391599258 +0000 UTC Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.903614 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:17:58 crc kubenswrapper[4871]: E0128 15:17:58.903779 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.917460 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.917524 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.917571 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.917595 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.917613 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.917626 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:58Z","lastTransitionTime":"2026-01-28T15:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.929104 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.941520 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.964628 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.977875 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:58 crc kubenswrapper[4871]: I0128 15:17:58.989875 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.002045 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.015371 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.020492 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.020532 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.020544 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.020567 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.020581 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:59Z","lastTransitionTime":"2026-01-28T15:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.041793 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.074706 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.125045 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.125096 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.125113 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.125134 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 
15:17:59.125145 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:59Z","lastTransitionTime":"2026-01-28T15:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.125381 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to 
sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.161872 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.201249 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.227854 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.227891 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.227900 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.227915 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.227927 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:59Z","lastTransitionTime":"2026-01-28T15:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.239054 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cc566abdc8b82bf93a1bb3f4cd6e3f6df3a4f07dfa695858f55f4cea34fbfbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.273891 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:17:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.331520 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.331633 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.331658 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.331691 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.331722 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:59Z","lastTransitionTime":"2026-01-28T15:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.434632 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.434687 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.434702 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.434725 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.434739 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:59Z","lastTransitionTime":"2026-01-28T15:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.537626 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.537684 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.537701 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.537727 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.537742 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:59Z","lastTransitionTime":"2026-01-28T15:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.641317 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.641373 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.641384 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.641411 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.641423 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:59Z","lastTransitionTime":"2026-01-28T15:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.744121 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.744444 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.744454 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.744470 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.744479 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:59Z","lastTransitionTime":"2026-01-28T15:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.847579 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.847659 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.847676 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.847705 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.847723 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:59Z","lastTransitionTime":"2026-01-28T15:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.850657 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 16:56:13.94066993 +0000 UTC Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.903293 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.903469 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:17:59 crc kubenswrapper[4871]: E0128 15:17:59.903656 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:17:59 crc kubenswrapper[4871]: E0128 15:17:59.903906 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.951178 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.951241 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.951259 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.951290 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:17:59 crc kubenswrapper[4871]: I0128 15:17:59.951331 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:17:59Z","lastTransitionTime":"2026-01-28T15:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.053954 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.054019 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.054037 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.054064 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.054095 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:00Z","lastTransitionTime":"2026-01-28T15:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.158383 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.158451 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.158470 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.158544 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.158565 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:00Z","lastTransitionTime":"2026-01-28T15:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.200266 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovnkube-controller/0.log" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.204954 4871 generic.go:334] "Generic (PLEG): container finished" podID="178343c8-b657-4440-953e-6daef3609145" containerID="8cc566abdc8b82bf93a1bb3f4cd6e3f6df3a4f07dfa695858f55f4cea34fbfbb" exitCode=1 Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.205018 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerDied","Data":"8cc566abdc8b82bf93a1bb3f4cd6e3f6df3a4f07dfa695858f55f4cea34fbfbb"} Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.206277 4871 scope.go:117] "RemoveContainer" containerID="8cc566abdc8b82bf93a1bb3f4cd6e3f6df3a4f07dfa695858f55f4cea34fbfbb" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.226061 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:00Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.248874 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:00Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.263719 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:00 crc 
kubenswrapper[4871]: I0128 15:18:00.263776 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.263793 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.263819 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.263838 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:00Z","lastTransitionTime":"2026-01-28T15:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.272711 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:00Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.290891 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:00Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.310889 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:00Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.323882 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69
520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:00Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.335686 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:00Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.349233 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:00Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.359968 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:00Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.365937 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.365970 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.365980 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.365996 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.366007 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:00Z","lastTransitionTime":"2026-01-28T15:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.380500 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cc566abdc8b82bf93a1bb3f4cd6e3f6df3a4f07dfa695858f55f4cea34fbfbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cc566abdc8b82bf93a1bb3f4cd6e3f6df3a4f07dfa695858f55f4cea34fbfbb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:17:59Z\\\",\\\"message\\\":\\\" 6183 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:17:59.839873 6183 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:17:59.839886 6183 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI0128 15:17:59.839921 6183 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 15:17:59.840000 6183 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:17:59.840186 6183 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:17:59.840211 6183 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:17:59.840225 6183 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:17:59.841253 6183 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:17:59.841271 6183 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 15:17:59.841301 6183 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:17:59.841315 6183 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:17:59.841321 6183 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:17:59.841328 6183 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:17:59.842345 6183 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:17:59.842558 6183 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:00Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.407021 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:00Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.422149 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:00Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.437858 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:00Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.450184 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:18:00Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.460497 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:00Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.468279 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.468345 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.468364 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.468388 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.468408 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:00Z","lastTransitionTime":"2026-01-28T15:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.570208 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.570244 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.570255 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.570270 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.570284 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:00Z","lastTransitionTime":"2026-01-28T15:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.674259 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.674328 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.674347 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.674372 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.674390 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:00Z","lastTransitionTime":"2026-01-28T15:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.778016 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.778077 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.778094 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.778117 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.778137 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:00Z","lastTransitionTime":"2026-01-28T15:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.851082 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 12:33:23.670824113 +0000 UTC Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.880752 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.880797 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.880805 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.880835 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.880846 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:00Z","lastTransitionTime":"2026-01-28T15:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.903158 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:00 crc kubenswrapper[4871]: E0128 15:18:00.903300 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.983354 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.983387 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.983395 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.983408 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:00 crc kubenswrapper[4871]: I0128 15:18:00.983418 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:00Z","lastTransitionTime":"2026-01-28T15:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.086081 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.086136 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.086146 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.086159 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.086169 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:01Z","lastTransitionTime":"2026-01-28T15:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.188682 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.188749 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.188767 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.188793 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.188811 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:01Z","lastTransitionTime":"2026-01-28T15:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.211485 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovnkube-controller/0.log" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.215373 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerStarted","Data":"1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317"} Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.215927 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.240335 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd0108
9bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.255807 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.278188 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to 
sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.278541 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965"] Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.279168 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.282399 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.282573 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.292633 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.292677 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.292690 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.292714 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.292726 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:01Z","lastTransitionTime":"2026-01-28T15:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.303632 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.319704 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.339472 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cc566abdc8b82bf93a1bb3f4cd6e3f6df3a4f07dfa695858f55f4cea34fbfbb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:17:59Z\\\",\\\"message\\\":\\\" 6183 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:17:59.839873 6183 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:17:59.839886 6183 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:17:59.839921 6183 handler.go:208] Removed *v1.Namespace event 
handler 5\\\\nI0128 15:17:59.840000 6183 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:17:59.840186 6183 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:17:59.840211 6183 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:17:59.840225 6183 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:17:59.841253 6183 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:17:59.841271 6183 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 15:17:59.841301 6183 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:17:59.841315 6183 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:17:59.841321 6183 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:17:59.841328 6183 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:17:59.842345 6183 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:17:59.842558 6183 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.353174 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.369984 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.384665 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.395448 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.395483 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.395493 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.395506 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.395517 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:01Z","lastTransitionTime":"2026-01-28T15:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.398889 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.417696 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.429905 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.436647 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svgpz\" (UniqueName: 
\"kubernetes.io/projected/aa280ea6-1d32-4098-be1f-b7314f1a0576-kube-api-access-svgpz\") pod \"ovnkube-control-plane-749d76644c-7t965\" (UID: \"aa280ea6-1d32-4098-be1f-b7314f1a0576\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.436923 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa280ea6-1d32-4098-be1f-b7314f1a0576-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7t965\" (UID: \"aa280ea6-1d32-4098-be1f-b7314f1a0576\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.437099 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa280ea6-1d32-4098-be1f-b7314f1a0576-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7t965\" (UID: \"aa280ea6-1d32-4098-be1f-b7314f1a0576\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.437244 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa280ea6-1d32-4098-be1f-b7314f1a0576-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7t965\" (UID: \"aa280ea6-1d32-4098-be1f-b7314f1a0576\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.441003 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.450021 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.463359 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.474036 4871 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.490138 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.498408 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.498471 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.498484 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.498561 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.498582 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:01Z","lastTransitionTime":"2026-01-28T15:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.506488 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b
6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.521619 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.532711 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.538549 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa280ea6-1d32-4098-be1f-b7314f1a0576-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7t965\" (UID: \"aa280ea6-1d32-4098-be1f-b7314f1a0576\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.538712 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa280ea6-1d32-4098-be1f-b7314f1a0576-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7t965\" (UID: \"aa280ea6-1d32-4098-be1f-b7314f1a0576\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.538817 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa280ea6-1d32-4098-be1f-b7314f1a0576-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7t965\" (UID: \"aa280ea6-1d32-4098-be1f-b7314f1a0576\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.538925 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svgpz\" (UniqueName: \"kubernetes.io/projected/aa280ea6-1d32-4098-be1f-b7314f1a0576-kube-api-access-svgpz\") pod \"ovnkube-control-plane-749d76644c-7t965\" (UID: \"aa280ea6-1d32-4098-be1f-b7314f1a0576\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.539375 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/aa280ea6-1d32-4098-be1f-b7314f1a0576-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7t965\" (UID: \"aa280ea6-1d32-4098-be1f-b7314f1a0576\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.539752 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa280ea6-1d32-4098-be1f-b7314f1a0576-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7t965\" (UID: \"aa280ea6-1d32-4098-be1f-b7314f1a0576\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.544177 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.545863 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa280ea6-1d32-4098-be1f-b7314f1a0576-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7t965\" (UID: \"aa280ea6-1d32-4098-be1f-b7314f1a0576\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.555848 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svgpz\" (UniqueName: \"kubernetes.io/projected/aa280ea6-1d32-4098-be1f-b7314f1a0576-kube-api-access-svgpz\") pod \"ovnkube-control-plane-749d76644c-7t965\" (UID: \"aa280ea6-1d32-4098-be1f-b7314f1a0576\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.559869 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.575315 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.585416 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.600780 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.600826 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.600838 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.600858 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.600871 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:01Z","lastTransitionTime":"2026-01-28T15:18:01Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.603522 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.603938 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to 
sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: W0128 15:18:01.617119 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa280ea6_1d32_4098_be1f_b7314f1a0576.slice/crio-dd83c5304396a79cc1dd0c86362a975bcc525e284023db7a9e3af18947119bd3 WatchSource:0}: Error finding container dd83c5304396a79cc1dd0c86362a975bcc525e284023db7a9e3af18947119bd3: Status 404 returned error can't find the container with id dd83c5304396a79cc1dd0c86362a975bcc525e284023db7a9e3af18947119bd3 Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.624813 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.645269 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.675215 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cc566abdc8b82bf93a1bb3f4cd6e3f6df3a4f07dfa695858f55f4cea34fbfbb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:17:59Z\\\",\\\"message\\\":\\\" 6183 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:17:59.839873 6183 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:17:59.839886 6183 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:17:59.839921 6183 handler.go:208] Removed *v1.Namespace event 
handler 5\\\\nI0128 15:17:59.840000 6183 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:17:59.840186 6183 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:17:59.840211 6183 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:17:59.840225 6183 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:17:59.841253 6183 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:17:59.841271 6183 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 15:17:59.841301 6183 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:17:59.841315 6183 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:17:59.841321 6183 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:17:59.841328 6183 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:17:59.842345 6183 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:17:59.842558 6183 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.705124 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.705157 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.705166 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.705181 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.705190 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:01Z","lastTransitionTime":"2026-01-28T15:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.711719 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.728122 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.747674 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.807424 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.807457 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.807466 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.807480 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.807491 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:01Z","lastTransitionTime":"2026-01-28T15:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.852084 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 15:33:03.894498369 +0000 UTC Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.903624 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.903662 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:01 crc kubenswrapper[4871]: E0128 15:18:01.903930 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:01 crc kubenswrapper[4871]: E0128 15:18:01.904042 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.911643 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.911702 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.911715 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.911738 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:01 crc kubenswrapper[4871]: I0128 15:18:01.911753 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:01Z","lastTransitionTime":"2026-01-28T15:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.015002 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.015070 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.015087 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.015111 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.015127 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:02Z","lastTransitionTime":"2026-01-28T15:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.118386 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.118469 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.118496 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.118531 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.118563 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:02Z","lastTransitionTime":"2026-01-28T15:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.222471 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.223106 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.223130 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.223162 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.223183 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:02Z","lastTransitionTime":"2026-01-28T15:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.223703 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovnkube-controller/1.log" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.224654 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovnkube-controller/0.log" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.228663 4871 generic.go:334] "Generic (PLEG): container finished" podID="178343c8-b657-4440-953e-6daef3609145" containerID="1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317" exitCode=1 Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.228774 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerDied","Data":"1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317"} Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.228846 4871 scope.go:117] "RemoveContainer" containerID="8cc566abdc8b82bf93a1bb3f4cd6e3f6df3a4f07dfa695858f55f4cea34fbfbb" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.231875 4871 scope.go:117] "RemoveContainer" containerID="1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317" Jan 28 15:18:02 crc kubenswrapper[4871]: E0128 15:18:02.232207 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podUID="178343c8-b657-4440-953e-6daef3609145" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.233172 4871 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" event={"ID":"aa280ea6-1d32-4098-be1f-b7314f1a0576","Type":"ContainerStarted","Data":"02cfcf58f96119fd03a37cc47985518ce33dd48984718cc2968b6ebb43bd6913"} Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.233203 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" event={"ID":"aa280ea6-1d32-4098-be1f-b7314f1a0576","Type":"ContainerStarted","Data":"dd83c5304396a79cc1dd0c86362a975bcc525e284023db7a9e3af18947119bd3"} Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.254488 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containe
rID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\
\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.270272 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.294410 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to 
sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.317233 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.327033 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.327086 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.327099 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.327118 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.327133 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:02Z","lastTransitionTime":"2026-01-28T15:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.334281 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.368058 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cc566abdc8b82bf93a1bb3f4cd6e3f6df3a4f07dfa695858f55f4cea34fbfbb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:17:59Z\\\",\\\"message\\\":\\\" 6183 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:17:59.839873 6183 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:17:59.839886 6183 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:17:59.839921 6183 handler.go:208] Removed *v1.Namespace event 
handler 5\\\\nI0128 15:17:59.840000 6183 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:17:59.840186 6183 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:17:59.840211 6183 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:17:59.840225 6183 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:17:59.841253 6183 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:17:59.841271 6183 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 15:17:59.841301 6183 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:17:59.841315 6183 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:17:59.841321 6183 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:17:59.841328 6183 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:17:59.842345 6183 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:17:59.842558 6183 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"message\\\":\\\"ing reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:18:01.356733 6311 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:18:01.356819 6311 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:18:01.356829 6311 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 15:18:01.356850 6311 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:18:01.356869 6311 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 
15:18:01.356879 6311 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:18:01.356905 6311 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:18:01.356905 6311 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:18:01.356923 6311 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:18:01.356937 6311 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:18:01.356942 6311 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:18:01.356947 6311 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:18:01.356961 6311 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:18:01.356968 6311 factory.go:656] Stopping watch factory\\\\nI0128 15:18:01.357025 6311 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:18:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.388545 4871 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.406403 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.426985 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:18:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.430112 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.430149 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.430158 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.430176 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.430186 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:02Z","lastTransitionTime":"2026-01-28T15:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.446271 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.476240 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.496551 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.522182 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.533675 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.533742 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.533761 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.533799 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.533819 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:02Z","lastTransitionTime":"2026-01-28T15:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.541748 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-28T15:18:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.568204 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T15:18:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.584174 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.636914 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.637244 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.637317 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.637403 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.637470 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:02Z","lastTransitionTime":"2026-01-28T15:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.741092 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.741183 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.741204 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.741233 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.741253 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:02Z","lastTransitionTime":"2026-01-28T15:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.844994 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.845480 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.845662 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.845810 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.846138 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:02Z","lastTransitionTime":"2026-01-28T15:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.852536 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 21:43:30.321610811 +0000 UTC Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.903525 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:02 crc kubenswrapper[4871]: E0128 15:18:02.903768 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.949637 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.949712 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.949739 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.949773 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:02 crc kubenswrapper[4871]: I0128 15:18:02.949805 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:02Z","lastTransitionTime":"2026-01-28T15:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.053780 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.053852 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.053870 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.053899 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.053920 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:03Z","lastTransitionTime":"2026-01-28T15:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.156764 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.156822 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.156839 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.156862 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.156878 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:03Z","lastTransitionTime":"2026-01-28T15:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.233891 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-jp46k"] Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.235109 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:03 crc kubenswrapper[4871]: E0128 15:18:03.235252 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.240201 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" event={"ID":"aa280ea6-1d32-4098-be1f-b7314f1a0576","Type":"ContainerStarted","Data":"9bd7df8770de3b5fe13ad0ac02c6a85f864af968e4ef04084b30f7f566736090"} Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.242862 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovnkube-controller/1.log" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.248435 4871 scope.go:117] "RemoveContainer" containerID="1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317" Jan 28 15:18:03 crc kubenswrapper[4871]: E0128 15:18:03.248721 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podUID="178343c8-b657-4440-953e-6daef3609145" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.258915 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.259453 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.259515 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.259535 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.259561 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.259580 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:03Z","lastTransitionTime":"2026-01-28T15:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.276340 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.293505 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.321257 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cc566abdc8b82bf93a1bb3f4cd6e3f6df3a4f07dfa695858f55f4cea34fbfbb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:17:59Z\\\",\\\"message\\\":\\\" 6183 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:17:59.839873 6183 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:17:59.839886 6183 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:17:59.839921 6183 handler.go:208] Removed *v1.Namespace event 
handler 5\\\\nI0128 15:17:59.840000 6183 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:17:59.840186 6183 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:17:59.840211 6183 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:17:59.840225 6183 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:17:59.841253 6183 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:17:59.841271 6183 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 15:17:59.841301 6183 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:17:59.841315 6183 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:17:59.841321 6183 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:17:59.841328 6183 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:17:59.842345 6183 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:17:59.842558 6183 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"message\\\":\\\"ing reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:18:01.356733 6311 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:18:01.356819 6311 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:18:01.356829 6311 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 15:18:01.356850 6311 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:18:01.356869 6311 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 
15:18:01.356879 6311 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:18:01.356905 6311 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:18:01.356905 6311 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:18:01.356923 6311 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:18:01.356937 6311 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:18:01.356942 6311 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:18:01.356947 6311 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:18:01.356961 6311 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:18:01.356968 6311 factory.go:656] Stopping watch factory\\\\nI0128 15:18:01.357025 6311 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:18:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.348258 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.359700 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs\") pod \"network-metrics-daemon-jp46k\" (UID: \"64aa044d-1eb6-4e5f-9c12-96ba346374fa\") " pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.359795 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrnj6\" (UniqueName: \"kubernetes.io/projected/64aa044d-1eb6-4e5f-9c12-96ba346374fa-kube-api-access-vrnj6\") pod \"network-metrics-daemon-jp46k\" (UID: \"64aa044d-1eb6-4e5f-9c12-96ba346374fa\") " pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.364233 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.364300 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.364321 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:03 
crc kubenswrapper[4871]: I0128 15:18:03.364348 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.364377 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:03Z","lastTransitionTime":"2026-01-28T15:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.373237 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.393821 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.409139 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.425200 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.441838 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.460500 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs\") pod \"network-metrics-daemon-jp46k\" (UID: \"64aa044d-1eb6-4e5f-9c12-96ba346374fa\") " pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.460575 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vrnj6\" (UniqueName: \"kubernetes.io/projected/64aa044d-1eb6-4e5f-9c12-96ba346374fa-kube-api-access-vrnj6\") pod \"network-metrics-daemon-jp46k\" (UID: \"64aa044d-1eb6-4e5f-9c12-96ba346374fa\") " pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:03 crc kubenswrapper[4871]: E0128 15:18:03.460813 4871 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:18:03 crc kubenswrapper[4871]: E0128 15:18:03.460949 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs podName:64aa044d-1eb6-4e5f-9c12-96ba346374fa nodeName:}" failed. No retries permitted until 2026-01-28 15:18:03.960920437 +0000 UTC m=+35.856758759 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs") pod "network-metrics-daemon-jp46k" (UID: "64aa044d-1eb6-4e5f-9c12-96ba346374fa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.461622 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.467389 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:03 crc 
kubenswrapper[4871]: I0128 15:18:03.467441 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.467455 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.467476 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.467490 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:03Z","lastTransitionTime":"2026-01-28T15:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.480102 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrnj6\" (UniqueName: \"kubernetes.io/projected/64aa044d-1eb6-4e5f-9c12-96ba346374fa-kube-api-access-vrnj6\") pod \"network-metrics-daemon-jp46k\" (UID: \"64aa044d-1eb6-4e5f-9c12-96ba346374fa\") " pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.481801 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.505767 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to 
sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.530694 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.556870 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.570890 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.570946 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.570965 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.570992 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.571010 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:03Z","lastTransitionTime":"2026-01-28T15:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.574522 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.591024 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jp46k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64aa044d-1eb6-4e5f-9c12-96ba346374fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jp46k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.626023 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce9
2369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.646462 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.664715 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.673851 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.673910 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.673927 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.673951 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.673968 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:03Z","lastTransitionTime":"2026-01-28T15:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.682173 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.702021 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.714803 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.732147 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.749365 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cfcf58f96119fd03a37cc47985518ce33dd48984718cc2968b6ebb43bd6913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd7df877
0de3b5fe13ad0ac02c6a85f864af968e4ef04084b30f7f566736090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.770227 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.776828 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.776880 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.776899 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 
15:18:03.776923 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.776941 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:03Z","lastTransitionTime":"2026-01-28T15:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.788691 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.813697 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5
a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.831705 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.852569 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jp46k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64aa044d-1eb6-4e5f-9c12-96ba346374fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jp46k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.853254 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 06:47:24.515822039 +0000 UTC Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.873910 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.879991 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.880210 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.880284 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.880408 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.880494 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:03Z","lastTransitionTime":"2026-01-28T15:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.893415 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.903258 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.903267 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:03 crc kubenswrapper[4871]: E0128 15:18:03.903518 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:03 crc kubenswrapper[4871]: E0128 15:18:03.903677 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.911539 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.942269 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"message\\\":\\\"ing reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:18:01.356733 6311 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:18:01.356819 6311 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:18:01.356829 6311 handler.go:190] Sending *v1.Pod event handler 6 
for removal\\\\nI0128 15:18:01.356850 6311 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:18:01.356869 6311 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:18:01.356879 6311 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:18:01.356905 6311 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:18:01.356905 6311 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:18:01.356923 6311 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:18:01.356937 6311 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:18:01.356942 6311 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:18:01.356947 6311 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:18:01.356961 6311 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:18:01.356968 6311 factory.go:656] Stopping watch factory\\\\nI0128 15:18:01.357025 6311 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:18:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77
da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.966136 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs\") pod \"network-metrics-daemon-jp46k\" (UID: \"64aa044d-1eb6-4e5f-9c12-96ba346374fa\") " pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:03 crc kubenswrapper[4871]: E0128 15:18:03.966407 4871 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:18:03 crc kubenswrapper[4871]: E0128 15:18:03.966537 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs podName:64aa044d-1eb6-4e5f-9c12-96ba346374fa nodeName:}" failed. 
No retries permitted until 2026-01-28 15:18:04.966509858 +0000 UTC m=+36.862348180 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs") pod "network-metrics-daemon-jp46k" (UID: "64aa044d-1eb6-4e5f-9c12-96ba346374fa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.983547 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.983644 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.983667 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.983696 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:03 crc kubenswrapper[4871]: I0128 15:18:03.983715 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:03Z","lastTransitionTime":"2026-01-28T15:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.067120 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.067243 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.067305 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.067345 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.067438 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:18:20.067397555 +0000 UTC m=+51.963235917 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.067498 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.067523 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.067542 4871 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.067631 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.067651 4871 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:18:20.067625852 +0000 UTC m=+51.963464214 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.067705 4871 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.067788 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:18:20.067768676 +0000 UTC m=+51.963607038 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.067797 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.067823 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.067843 4871 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.067912 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:18:20.06789284 +0000 UTC m=+51.963731192 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.067974 4871 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.068214 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:18:20.068116167 +0000 UTC m=+51.963954719 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.087098 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.087154 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.087177 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.087205 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.087224 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:04Z","lastTransitionTime":"2026-01-28T15:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.196241 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.196324 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.196346 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.196377 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.196399 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:04Z","lastTransitionTime":"2026-01-28T15:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.300131 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.300189 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.300291 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.300319 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.300346 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:04Z","lastTransitionTime":"2026-01-28T15:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.403427 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.403750 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.403960 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.404137 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.404287 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:04Z","lastTransitionTime":"2026-01-28T15:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.507649 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.507929 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.508083 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.508209 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.508462 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:04Z","lastTransitionTime":"2026-01-28T15:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.612650 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.612717 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.612736 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.612763 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.612784 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:04Z","lastTransitionTime":"2026-01-28T15:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.717212 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.717280 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.717300 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.717332 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.717352 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:04Z","lastTransitionTime":"2026-01-28T15:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.820755 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.820822 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.820841 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.820866 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.820886 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:04Z","lastTransitionTime":"2026-01-28T15:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.854244 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 04:35:20.000302671 +0000 UTC Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.898639 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.898687 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.898702 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.898723 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.898740 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:04Z","lastTransitionTime":"2026-01-28T15:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.903867 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.903919 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.904074 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.904264 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.922748 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.927748 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.927789 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.927802 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.927822 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.927839 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:04Z","lastTransitionTime":"2026-01-28T15:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.947770 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.953888 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.953956 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.953973 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.953998 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.954016 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:04Z","lastTransitionTime":"2026-01-28T15:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.974909 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.978488 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs\") pod \"network-metrics-daemon-jp46k\" (UID: \"64aa044d-1eb6-4e5f-9c12-96ba346374fa\") " pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.978665 4871 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.978740 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs podName:64aa044d-1eb6-4e5f-9c12-96ba346374fa nodeName:}" failed. No retries permitted until 2026-01-28 15:18:06.978720243 +0000 UTC m=+38.874558575 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs") pod "network-metrics-daemon-jp46k" (UID: "64aa044d-1eb6-4e5f-9c12-96ba346374fa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.979493 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.979534 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.979547 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.979568 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:04 crc kubenswrapper[4871]: I0128 15:18:04.979608 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:04Z","lastTransitionTime":"2026-01-28T15:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:04 crc kubenswrapper[4871]: E0128 15:18:04.998869 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.004468 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.004694 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.004868 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.005012 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.005144 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:05Z","lastTransitionTime":"2026-01-28T15:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:05 crc kubenswrapper[4871]: E0128 15:18:05.026843 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:05 crc kubenswrapper[4871]: E0128 15:18:05.027463 4871 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.029944 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.030200 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.030365 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.030512 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.030726 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:05Z","lastTransitionTime":"2026-01-28T15:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.134779 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.134859 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.134878 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.134934 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.134955 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:05Z","lastTransitionTime":"2026-01-28T15:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.238058 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.238098 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.238105 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.238118 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.238126 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:05Z","lastTransitionTime":"2026-01-28T15:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.341546 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.341648 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.341668 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.341689 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.341705 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:05Z","lastTransitionTime":"2026-01-28T15:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.445187 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.445259 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.445278 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.445308 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.445328 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:05Z","lastTransitionTime":"2026-01-28T15:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.548493 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.548563 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.548613 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.548646 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.548666 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:05Z","lastTransitionTime":"2026-01-28T15:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.652625 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.652700 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.652719 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.652747 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.652766 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:05Z","lastTransitionTime":"2026-01-28T15:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.756685 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.757059 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.757259 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.757415 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.757559 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:05Z","lastTransitionTime":"2026-01-28T15:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.855407 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:39:32.972959435 +0000 UTC Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.860853 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.860949 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.860972 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.861002 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.861028 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:05Z","lastTransitionTime":"2026-01-28T15:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.903742 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.903803 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:05 crc kubenswrapper[4871]: E0128 15:18:05.903938 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:05 crc kubenswrapper[4871]: E0128 15:18:05.904083 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.965216 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.965269 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.965282 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.965303 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:05 crc kubenswrapper[4871]: I0128 15:18:05.965315 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:05Z","lastTransitionTime":"2026-01-28T15:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.068800 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.068879 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.068906 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.068939 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.068962 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:06Z","lastTransitionTime":"2026-01-28T15:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.172223 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.172291 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.172309 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.172335 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.172353 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:06Z","lastTransitionTime":"2026-01-28T15:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.275090 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.275147 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.275159 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.275179 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.275195 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:06Z","lastTransitionTime":"2026-01-28T15:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.379115 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.379183 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.379203 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.379234 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.379257 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:06Z","lastTransitionTime":"2026-01-28T15:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.483210 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.483273 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.483293 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.483320 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.483338 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:06Z","lastTransitionTime":"2026-01-28T15:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.586063 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.586105 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.586115 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.586131 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.586141 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:06Z","lastTransitionTime":"2026-01-28T15:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.689980 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.690029 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.690061 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.690080 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.690090 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:06Z","lastTransitionTime":"2026-01-28T15:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.793711 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.793761 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.793772 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.793791 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.793805 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:06Z","lastTransitionTime":"2026-01-28T15:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.856019 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:43:58.805284184 +0000 UTC Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.897487 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.897570 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.897626 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.897668 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.897691 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:06Z","lastTransitionTime":"2026-01-28T15:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.903295 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.903441 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:06 crc kubenswrapper[4871]: E0128 15:18:06.903893 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:06 crc kubenswrapper[4871]: E0128 15:18:06.904105 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:06 crc kubenswrapper[4871]: I0128 15:18:06.904369 4871 scope.go:117] "RemoveContainer" containerID="e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.000447 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs\") pod \"network-metrics-daemon-jp46k\" (UID: \"64aa044d-1eb6-4e5f-9c12-96ba346374fa\") " pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:07 crc kubenswrapper[4871]: E0128 15:18:07.000712 4871 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:18:07 crc kubenswrapper[4871]: E0128 15:18:07.000796 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs 
podName:64aa044d-1eb6-4e5f-9c12-96ba346374fa nodeName:}" failed. No retries permitted until 2026-01-28 15:18:11.000774799 +0000 UTC m=+42.896613151 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs") pod "network-metrics-daemon-jp46k" (UID: "64aa044d-1eb6-4e5f-9c12-96ba346374fa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.002150 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.002202 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.002221 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.002246 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.002263 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:07Z","lastTransitionTime":"2026-01-28T15:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.104933 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.104982 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.104994 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.105014 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.105025 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:07Z","lastTransitionTime":"2026-01-28T15:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.207628 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.207673 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.207687 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.207705 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.207717 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:07Z","lastTransitionTime":"2026-01-28T15:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.269550 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.272219 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b"} Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.272822 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.289113 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f3
6cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-28T15:18:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.303828 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.311056 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.311100 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.311117 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.311142 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.311157 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:07Z","lastTransitionTime":"2026-01-28T15:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.318326 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.341289 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.358458 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.374269 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cfcf58f96119fd03a37cc47985518ce33dd48984718cc2968b6ebb43bd6913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd7df8770de3b5fe13ad0ac02c6a85f864af
968e4ef04084b30f7f566736090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.391573 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.408421 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.414371 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:07 crc 
kubenswrapper[4871]: I0128 15:18:07.414395 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.414403 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.414420 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.414432 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:07Z","lastTransitionTime":"2026-01-28T15:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.423987 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.441980 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.455194 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69
520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.469025 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jp46k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64aa044d-1eb6-4e5f-9c12-96ba346374fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jp46k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:07 crc 
kubenswrapper[4871]: I0128 15:18:07.492605 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to 
sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.514452 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.516691 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.516761 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.516780 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:07 crc 
kubenswrapper[4871]: I0128 15:18:07.516806 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.516824 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:07Z","lastTransitionTime":"2026-01-28T15:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.532908 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.567842 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"message\\\":\\\"ing reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:18:01.356733 6311 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:18:01.356819 6311 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:18:01.356829 6311 handler.go:190] Sending *v1.Pod event handler 6 
for removal\\\\nI0128 15:18:01.356850 6311 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:18:01.356869 6311 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:18:01.356879 6311 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:18:01.356905 6311 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:18:01.356905 6311 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:18:01.356923 6311 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:18:01.356937 6311 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:18:01.356942 6311 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:18:01.356947 6311 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:18:01.356961 6311 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:18:01.356968 6311 factory.go:656] Stopping watch factory\\\\nI0128 15:18:01.357025 6311 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:18:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77
da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.585093 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.619940 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.619994 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.620012 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 
15:18:07.620037 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.620055 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:07Z","lastTransitionTime":"2026-01-28T15:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.723316 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.723383 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.723402 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.723428 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.723448 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:07Z","lastTransitionTime":"2026-01-28T15:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.826273 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.826341 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.826358 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.826383 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.826401 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:07Z","lastTransitionTime":"2026-01-28T15:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.856839 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:04:11.785982207 +0000 UTC Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.903357 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.903360 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:07 crc kubenswrapper[4871]: E0128 15:18:07.903533 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:07 crc kubenswrapper[4871]: E0128 15:18:07.903763 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.929349 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.929497 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.929524 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.929552 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:07 crc kubenswrapper[4871]: I0128 15:18:07.929575 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:07Z","lastTransitionTime":"2026-01-28T15:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.032730 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.032812 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.032837 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.032869 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.032894 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:08Z","lastTransitionTime":"2026-01-28T15:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.135668 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.135747 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.135773 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.135804 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.135829 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:08Z","lastTransitionTime":"2026-01-28T15:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.238926 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.238995 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.239017 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.239048 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.239073 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:08Z","lastTransitionTime":"2026-01-28T15:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.342136 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.342197 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.342213 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.342236 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.342251 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:08Z","lastTransitionTime":"2026-01-28T15:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.445408 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.445703 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.445717 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.445734 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.445743 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:08Z","lastTransitionTime":"2026-01-28T15:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.548551 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.548668 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.548686 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.548711 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.548727 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:08Z","lastTransitionTime":"2026-01-28T15:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.652532 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.652632 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.652654 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.652680 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.652706 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:08Z","lastTransitionTime":"2026-01-28T15:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.755963 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.756005 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.756025 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.756041 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.756053 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:08Z","lastTransitionTime":"2026-01-28T15:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.857244 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:25:22.403090978 +0000 UTC Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.859032 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.859079 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.859097 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.859120 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.859139 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:08Z","lastTransitionTime":"2026-01-28T15:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.903093 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.903281 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:08 crc kubenswrapper[4871]: E0128 15:18:08.903440 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:08 crc kubenswrapper[4871]: E0128 15:18:08.903895 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.926741 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.954525 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.962188 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.962235 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.962253 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.962277 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.962294 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:08Z","lastTransitionTime":"2026-01-28T15:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.971209 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:08 crc kubenswrapper[4871]: I0128 15:18:08.993381 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jp46k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64aa044d-1eb6-4e5f-9c12-96ba346374fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jp46k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.018318 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to 
sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.037093 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.053765 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.066026 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.066066 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.066081 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.066102 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.066116 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:09Z","lastTransitionTime":"2026-01-28T15:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.083853 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"message\\\":\\\"ing reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:18:01.356733 6311 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:18:01.356819 6311 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:18:01.356829 6311 handler.go:190] Sending *v1.Pod event handler 6 
for removal\\\\nI0128 15:18:01.356850 6311 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:18:01.356869 6311 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:18:01.356879 6311 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:18:01.356905 6311 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:18:01.356905 6311 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:18:01.356923 6311 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:18:01.356937 6311 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:18:01.356942 6311 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:18:01.356947 6311 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:18:01.356961 6311 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:18:01.356968 6311 factory.go:656] Stopping watch factory\\\\nI0128 15:18:01.357025 6311 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:18:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77
da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.103026 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.122230 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.138150 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:18:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.155205 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.168975 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.169009 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.169021 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.169036 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.169049 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:09Z","lastTransitionTime":"2026-01-28T15:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.189847 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.211038 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.225669 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cfcf58f96119fd03a37cc47985518ce33dd48984718cc2968b6ebb43bd6913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd7df8770de3b5fe13ad0ac02c6a85f864af
968e4ef04084b30f7f566736090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.242942 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.260534 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.271763 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:09 crc 
kubenswrapper[4871]: I0128 15:18:09.271818 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.271835 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.271860 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.271877 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:09Z","lastTransitionTime":"2026-01-28T15:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.375404 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.375456 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.375468 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.375486 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.375498 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:09Z","lastTransitionTime":"2026-01-28T15:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.478912 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.478979 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.478999 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.479025 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.479045 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:09Z","lastTransitionTime":"2026-01-28T15:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.582706 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.582798 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.582820 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.582847 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.582866 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:09Z","lastTransitionTime":"2026-01-28T15:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.686548 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.686657 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.686676 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.686701 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.686721 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:09Z","lastTransitionTime":"2026-01-28T15:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.790793 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.790880 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.790914 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.790944 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.790965 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:09Z","lastTransitionTime":"2026-01-28T15:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.857782 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 17:20:39.852229624 +0000 UTC Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.893896 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.893962 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.893979 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.894002 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.894019 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:09Z","lastTransitionTime":"2026-01-28T15:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.903258 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.903346 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:09 crc kubenswrapper[4871]: E0128 15:18:09.903383 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:09 crc kubenswrapper[4871]: E0128 15:18:09.903572 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.996985 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.997062 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.997098 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.997162 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:09 crc kubenswrapper[4871]: I0128 15:18:09.997188 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:09Z","lastTransitionTime":"2026-01-28T15:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.100549 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.100643 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.100706 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.100736 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.100755 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:10Z","lastTransitionTime":"2026-01-28T15:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.204235 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.204316 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.204342 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.204372 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.204395 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:10Z","lastTransitionTime":"2026-01-28T15:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.307724 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.307795 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.307818 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.307862 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.307887 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:10Z","lastTransitionTime":"2026-01-28T15:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.410771 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.410890 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.410917 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.410946 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.410968 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:10Z","lastTransitionTime":"2026-01-28T15:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.513698 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.513769 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.513788 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.513813 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.513830 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:10Z","lastTransitionTime":"2026-01-28T15:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.616334 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.616399 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.616415 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.616436 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.616448 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:10Z","lastTransitionTime":"2026-01-28T15:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.719969 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.720025 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.720042 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.720070 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.720088 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:10Z","lastTransitionTime":"2026-01-28T15:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.823947 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.824021 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.824045 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.824070 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.824088 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:10Z","lastTransitionTime":"2026-01-28T15:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.858942 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 06:55:58.005411555 +0000 UTC Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.903941 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.904059 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:10 crc kubenswrapper[4871]: E0128 15:18:10.904180 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:10 crc kubenswrapper[4871]: E0128 15:18:10.904498 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.927278 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.927371 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.927394 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.927422 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:10 crc kubenswrapper[4871]: I0128 15:18:10.927440 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:10Z","lastTransitionTime":"2026-01-28T15:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.031115 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.031171 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.031189 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.031214 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.031297 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:11Z","lastTransitionTime":"2026-01-28T15:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.038463 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs\") pod \"network-metrics-daemon-jp46k\" (UID: \"64aa044d-1eb6-4e5f-9c12-96ba346374fa\") " pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:11 crc kubenswrapper[4871]: E0128 15:18:11.038720 4871 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:18:11 crc kubenswrapper[4871]: E0128 15:18:11.038834 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs podName:64aa044d-1eb6-4e5f-9c12-96ba346374fa nodeName:}" failed. No retries permitted until 2026-01-28 15:18:19.038801715 +0000 UTC m=+50.934640077 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs") pod "network-metrics-daemon-jp46k" (UID: "64aa044d-1eb6-4e5f-9c12-96ba346374fa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.133892 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.133992 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.134016 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.134045 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.134067 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:11Z","lastTransitionTime":"2026-01-28T15:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.237714 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.237768 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.237784 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.237808 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.237824 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:11Z","lastTransitionTime":"2026-01-28T15:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.340754 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.340827 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.340851 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.340880 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.340902 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:11Z","lastTransitionTime":"2026-01-28T15:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.444397 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.444490 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.444516 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.444546 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.444569 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:11Z","lastTransitionTime":"2026-01-28T15:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.547381 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.547460 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.547486 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.547519 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.547542 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:11Z","lastTransitionTime":"2026-01-28T15:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.650170 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.650242 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.650265 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.650297 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.650317 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:11Z","lastTransitionTime":"2026-01-28T15:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.754483 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.754522 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.754531 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.754551 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.754562 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:11Z","lastTransitionTime":"2026-01-28T15:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.857369 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.857422 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.857435 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.857456 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.857471 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:11Z","lastTransitionTime":"2026-01-28T15:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.859773 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 10:48:11.750958466 +0000 UTC Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.903388 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.903460 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:11 crc kubenswrapper[4871]: E0128 15:18:11.903552 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:11 crc kubenswrapper[4871]: E0128 15:18:11.903695 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.960111 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.960177 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.960201 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.960230 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:11 crc kubenswrapper[4871]: I0128 15:18:11.960251 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:11Z","lastTransitionTime":"2026-01-28T15:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.064169 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.064253 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.064278 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.064309 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.064331 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:12Z","lastTransitionTime":"2026-01-28T15:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.168063 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.168138 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.168160 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.168183 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.168200 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:12Z","lastTransitionTime":"2026-01-28T15:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.270783 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.270830 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.270842 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.270859 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.270872 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:12Z","lastTransitionTime":"2026-01-28T15:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.373809 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.373844 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.373860 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.373882 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.373893 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:12Z","lastTransitionTime":"2026-01-28T15:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.476909 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.476956 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.476972 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.476995 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.477012 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:12Z","lastTransitionTime":"2026-01-28T15:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.579747 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.579804 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.579820 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.579844 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.579864 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:12Z","lastTransitionTime":"2026-01-28T15:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.683294 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.683358 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.683389 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.683420 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.683443 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:12Z","lastTransitionTime":"2026-01-28T15:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.787005 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.787076 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.787100 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.787132 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.787158 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:12Z","lastTransitionTime":"2026-01-28T15:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.860343 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 23:01:05.378371627 +0000 UTC Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.890158 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.890235 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.890258 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.890291 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.890317 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:12Z","lastTransitionTime":"2026-01-28T15:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.903677 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.903807 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:12 crc kubenswrapper[4871]: E0128 15:18:12.903878 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:12 crc kubenswrapper[4871]: E0128 15:18:12.904030 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.993713 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.993776 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.993793 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.993818 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:12 crc kubenswrapper[4871]: I0128 15:18:12.993836 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:12Z","lastTransitionTime":"2026-01-28T15:18:12Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.096441 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.096764 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.096853 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.096932 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.097002 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:13Z","lastTransitionTime":"2026-01-28T15:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.200119 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.200775 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.200866 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.200933 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.201005 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:13Z","lastTransitionTime":"2026-01-28T15:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.303911 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.303990 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.304014 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.304044 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.304066 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:13Z","lastTransitionTime":"2026-01-28T15:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.407268 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.407319 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.407345 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.407370 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.407385 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:13Z","lastTransitionTime":"2026-01-28T15:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.510562 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.510675 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.510709 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.510740 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.510763 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:13Z","lastTransitionTime":"2026-01-28T15:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.613194 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.613787 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.613945 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.614201 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.614428 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:13Z","lastTransitionTime":"2026-01-28T15:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.718055 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.718130 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.718155 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.718184 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.718207 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:13Z","lastTransitionTime":"2026-01-28T15:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.821488 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.821561 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.821632 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.821666 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.821690 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:13Z","lastTransitionTime":"2026-01-28T15:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.861534 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 11:25:20.688583269 +0000 UTC Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.903383 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.903468 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:13 crc kubenswrapper[4871]: E0128 15:18:13.903676 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:13 crc kubenswrapper[4871]: E0128 15:18:13.904365 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.904847 4871 scope.go:117] "RemoveContainer" containerID="1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.924509 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.924812 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.924963 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.925139 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:13 crc kubenswrapper[4871]: I0128 15:18:13.925292 4871 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:13Z","lastTransitionTime":"2026-01-28T15:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.029684 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.030076 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.030236 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.030474 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.030753 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:14Z","lastTransitionTime":"2026-01-28T15:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.134868 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.134920 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.134947 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.135002 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.135047 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:14Z","lastTransitionTime":"2026-01-28T15:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.238462 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.238494 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.238502 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.238519 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.238529 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:14Z","lastTransitionTime":"2026-01-28T15:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.301856 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovnkube-controller/1.log" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.307207 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerStarted","Data":"263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff"} Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.307806 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.327441 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to 
sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.341468 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.341526 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.341550 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.341578 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.341635 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:14Z","lastTransitionTime":"2026-01-28T15:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.346670 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.370542 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-a
dditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df3
12ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.392082 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.414312 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jp46k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64aa044d-1eb6-4e5f-9c12-96ba346374fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jp46k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:14 crc 
kubenswrapper[4871]: I0128 15:18:14.435321 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.445463 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.445524 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.445543 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.445570 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.445613 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:14Z","lastTransitionTime":"2026-01-28T15:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.456268 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.475736 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.502936 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"message\\\":\\\"ing reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:18:01.356733 6311 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:18:01.356819 6311 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:18:01.356829 6311 handler.go:190] Sending *v1.Pod event handler 6 
for removal\\\\nI0128 15:18:01.356850 6311 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:18:01.356869 6311 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:18:01.356879 6311 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:18:01.356905 6311 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:18:01.356905 6311 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:18:01.356923 6311 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:18:01.356937 6311 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:18:01.356942 6311 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:18:01.356947 6311 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:18:01.356961 6311 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:18:01.356968 6311 factory.go:656] Stopping watch factory\\\\nI0128 15:18:01.357025 6311 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
15:18:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.542283 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.548354 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.548426 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.548444 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.548476 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.548495 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:14Z","lastTransitionTime":"2026-01-28T15:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.567376 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b
6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.584528 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.599121 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:18:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.613899 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.629757 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.645467 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.651068 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:14 crc 
kubenswrapper[4871]: I0128 15:18:14.651134 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.651151 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.651177 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.651196 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:14Z","lastTransitionTime":"2026-01-28T15:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.657361 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cfcf58f96119fd03a37cc47985518ce33dd48984718cc2968b6ebb43bd6913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd7df8770de3b5fe13ad0ac02c6a85f864af
968e4ef04084b30f7f566736090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.754312 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.754367 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.754377 4871 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.754395 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.754406 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:14Z","lastTransitionTime":"2026-01-28T15:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.856743 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.856907 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.856934 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.857000 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.857022 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:14Z","lastTransitionTime":"2026-01-28T15:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.862709 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:56:52.634050915 +0000 UTC Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.903561 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.903624 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:14 crc kubenswrapper[4871]: E0128 15:18:14.903714 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:14 crc kubenswrapper[4871]: E0128 15:18:14.903816 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.960032 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.960066 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.960077 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.960094 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:14 crc kubenswrapper[4871]: I0128 15:18:14.960106 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:14Z","lastTransitionTime":"2026-01-28T15:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.069837 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.069902 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.069920 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.069945 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.069964 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:15Z","lastTransitionTime":"2026-01-28T15:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.149080 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.149181 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.149193 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.149235 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.149251 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:15Z","lastTransitionTime":"2026-01-28T15:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:15 crc kubenswrapper[4871]: E0128 15:18:15.163113 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.168122 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.168171 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.168209 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.168226 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.168235 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:15Z","lastTransitionTime":"2026-01-28T15:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:15 crc kubenswrapper[4871]: E0128 15:18:15.188203 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.194226 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.194304 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.194324 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.194353 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.194374 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:15Z","lastTransitionTime":"2026-01-28T15:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:15 crc kubenswrapper[4871]: E0128 15:18:15.215280 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.220678 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.220739 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.220750 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.220777 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.220794 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:15Z","lastTransitionTime":"2026-01-28T15:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:15 crc kubenswrapper[4871]: E0128 15:18:15.240742 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.245633 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.245698 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.245712 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.245732 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.245745 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:15Z","lastTransitionTime":"2026-01-28T15:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:15 crc kubenswrapper[4871]: E0128 15:18:15.261985 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: E0128 15:18:15.262111 4871 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.263958 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.263992 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.264002 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.264037 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.264049 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:15Z","lastTransitionTime":"2026-01-28T15:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.313546 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovnkube-controller/2.log" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.314551 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovnkube-controller/1.log" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.318540 4871 generic.go:334] "Generic (PLEG): container finished" podID="178343c8-b657-4440-953e-6daef3609145" containerID="263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff" exitCode=1 Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.318639 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerDied","Data":"263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff"} Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.318731 4871 scope.go:117] "RemoveContainer" containerID="1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.319965 4871 scope.go:117] "RemoveContainer" containerID="263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff" Jan 28 15:18:15 crc kubenswrapper[4871]: E0128 15:18:15.320436 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podUID="178343c8-b657-4440-953e-6daef3609145" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.342415 4871 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"ima
geID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f
2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.362730 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.366484 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.366511 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.366521 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.366541 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.366552 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:15Z","lastTransitionTime":"2026-01-28T15:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.385335 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.398807 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.412790 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jp46k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64aa044d-1eb6-4e5f-9c12-96ba346374fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jp46k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc 
kubenswrapper[4871]: I0128 15:18:15.430471 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.448743 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.462486 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.470188 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.470245 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.470270 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.470303 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.470327 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:15Z","lastTransitionTime":"2026-01-28T15:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.494098 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b6f7eee1e6d2b74498a169f800f46fdc9eb55fb935becd848ea2494e7a9f317\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"message\\\":\\\"ing reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:18:01.356733 6311 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:18:01.356819 6311 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:18:01.356829 6311 handler.go:190] Sending *v1.Pod event handler 6 
for removal\\\\nI0128 15:18:01.356850 6311 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:18:01.356869 6311 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:18:01.356879 6311 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:18:01.356905 6311 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:18:01.356905 6311 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:18:01.356923 6311 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:18:01.356937 6311 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:18:01.356942 6311 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:18:01.356947 6311 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:18:01.356961 6311 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:18:01.356968 6311 factory.go:656] Stopping watch factory\\\\nI0128 15:18:01.357025 6311 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:18:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:14Z\\\",\\\"message\\\":\\\"rnalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 15:18:14.984464 6544 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984474 6544 services_controller.go:445] Built service openshift-kube-scheduler-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984453 6544 services_controller.go:451] Built service 
openshift-route-controller-manager/route-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:18:14.983398 6544 default_network_controller.go:776] Recording success ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"moun
tPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\
\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 
crc kubenswrapper[4871]: I0128 15:18:15.528029 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928
e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.550197 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.568832 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.573515 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.573551 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.573562 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.573581 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.573621 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:15Z","lastTransitionTime":"2026-01-28T15:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.587079 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.601946 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.618449 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.632641 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.645189 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cfcf58f96119fd03a37cc47985518ce33dd48984718cc2968b6ebb43bd6913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd7df877
0de3b5fe13ad0ac02c6a85f864af968e4ef04084b30f7f566736090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.678007 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.678345 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.678417 4871 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.678489 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.678555 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:15Z","lastTransitionTime":"2026-01-28T15:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.781805 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.781852 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.781865 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.781883 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.781897 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:15Z","lastTransitionTime":"2026-01-28T15:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.863300 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 09:23:55.834135408 +0000 UTC Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.885198 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.885629 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.885831 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.885998 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.886129 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:15Z","lastTransitionTime":"2026-01-28T15:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.903868 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.903961 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:15 crc kubenswrapper[4871]: E0128 15:18:15.904246 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:15 crc kubenswrapper[4871]: E0128 15:18:15.904432 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.989200 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.989298 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.989319 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.989344 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:15 crc kubenswrapper[4871]: I0128 15:18:15.989365 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:15Z","lastTransitionTime":"2026-01-28T15:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.093500 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.093568 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.093624 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.093655 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.093681 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:16Z","lastTransitionTime":"2026-01-28T15:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.195726 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.195814 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.195840 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.195865 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.195884 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:16Z","lastTransitionTime":"2026-01-28T15:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.298190 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.299136 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.299328 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.299515 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.299710 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:16Z","lastTransitionTime":"2026-01-28T15:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.325982 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovnkube-controller/2.log" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.330920 4871 scope.go:117] "RemoveContainer" containerID="263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff" Jan 28 15:18:16 crc kubenswrapper[4871]: E0128 15:18:16.331115 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podUID="178343c8-b657-4440-953e-6daef3609145" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.353903 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.387809 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.403916 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.403977 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.404001 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.404033 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.404056 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:16Z","lastTransitionTime":"2026-01-28T15:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.412500 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b
6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.433849 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.454352 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:18:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.469785 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.483951 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.496285 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cfcf58f96119fd03a37cc47985518ce33dd48984718cc2968b6ebb43bd6913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd7df877
0de3b5fe13ad0ac02c6a85f864af968e4ef04084b30f7f566736090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.506939 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.506988 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.507006 4871 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.507031 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.507051 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:16Z","lastTransitionTime":"2026-01-28T15:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.508199 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f0
2519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.523181 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jp46k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64aa044d-1eb6-4e5f-9c12-96ba346374fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jp46k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:16 crc 
kubenswrapper[4871]: I0128 15:18:16.540074 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to 
sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.560939 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3
dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.577937 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5
a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.600122 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:14Z\\\",\\\"message\\\":\\\"rnalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 15:18:14.984464 6544 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984474 6544 services_controller.go:445] Built service 
openshift-kube-scheduler-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984453 6544 services_controller.go:451] Built service openshift-route-controller-manager/route-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:18:14.983398 6544 default_network_controller.go:776] Recording success ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77
da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.610160 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.610273 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.610300 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.610335 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.610359 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:16Z","lastTransitionTime":"2026-01-28T15:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.622096 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.642966 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.659690 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.713667 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.713748 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.713768 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.713797 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.713826 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:16Z","lastTransitionTime":"2026-01-28T15:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.817365 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.817424 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.817440 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.817465 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.817482 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:16Z","lastTransitionTime":"2026-01-28T15:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.864288 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:45:05.497272705 +0000 UTC Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.903175 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:16 crc kubenswrapper[4871]: E0128 15:18:16.903359 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.903539 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:16 crc kubenswrapper[4871]: E0128 15:18:16.903782 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.921567 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.921699 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.921721 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.921748 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:16 crc kubenswrapper[4871]: I0128 15:18:16.921776 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:16Z","lastTransitionTime":"2026-01-28T15:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.025513 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.025929 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.026064 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.026256 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.026431 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:17Z","lastTransitionTime":"2026-01-28T15:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.129979 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.130034 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.130046 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.130064 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.130077 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:17Z","lastTransitionTime":"2026-01-28T15:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.233073 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.233136 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.233157 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.233183 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.233202 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:17Z","lastTransitionTime":"2026-01-28T15:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.337247 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.337289 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.337298 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.337312 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.337320 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:17Z","lastTransitionTime":"2026-01-28T15:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.440072 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.440132 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.440149 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.440174 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.440191 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:17Z","lastTransitionTime":"2026-01-28T15:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.543177 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.543237 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.543257 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.543279 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.543296 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:17Z","lastTransitionTime":"2026-01-28T15:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.646569 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.646674 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.646698 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.646729 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.646756 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:17Z","lastTransitionTime":"2026-01-28T15:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.749876 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.749964 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.750007 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.750042 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.750069 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:17Z","lastTransitionTime":"2026-01-28T15:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.853654 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.853701 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.853711 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.853731 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.853743 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:17Z","lastTransitionTime":"2026-01-28T15:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.865263 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:10:26.174618815 +0000 UTC Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.903156 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.903197 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:17 crc kubenswrapper[4871]: E0128 15:18:17.903332 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:17 crc kubenswrapper[4871]: E0128 15:18:17.903477 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.956907 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.956971 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.956995 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.957022 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:17 crc kubenswrapper[4871]: I0128 15:18:17.957045 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:17Z","lastTransitionTime":"2026-01-28T15:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.059789 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.059825 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.059835 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.059850 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.059860 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:18Z","lastTransitionTime":"2026-01-28T15:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.163155 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.163207 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.163220 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.163240 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.163253 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:18Z","lastTransitionTime":"2026-01-28T15:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.266113 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.266197 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.266221 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.266257 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.266282 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:18Z","lastTransitionTime":"2026-01-28T15:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.308230 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.322365 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.335958 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to 
sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.355327 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3
dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.369721 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.369875 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.369888 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.369905 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.369918 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:18Z","lastTransitionTime":"2026-01-28T15:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.382539 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.394193 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.409144 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jp46k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64aa044d-1eb6-4e5f-9c12-96ba346374fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jp46k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc 
kubenswrapper[4871]: I0128 15:18:18.426302 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.441545 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.454027 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.472120 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.472169 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.472182 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.472200 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.472212 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:18Z","lastTransitionTime":"2026-01-28T15:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.474172 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:14Z\\\",\\\"message\\\":\\\"rnalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 15:18:14.984464 6544 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984474 6544 services_controller.go:445] Built service 
openshift-kube-scheduler-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984453 6544 services_controller.go:451] Built service openshift-route-controller-manager/route-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:18:14.983398 6544 default_network_controller.go:776] Recording success ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77
da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.495836 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.508465 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.520712 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.534357 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.549745 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.567788 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.574270 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.574346 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.574366 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:18 crc 
kubenswrapper[4871]: I0128 15:18:18.574397 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.574416 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:18Z","lastTransitionTime":"2026-01-28T15:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.581827 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.594366 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cfcf58f96119fd03a37cc47985518ce33dd48984718cc2968b6ebb43bd6913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd7df8770de3b5fe13ad0ac02c6a85f864af968e4ef04084b30f7f566736090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:
01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.676828 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.676863 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.676872 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.676887 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.676896 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:18Z","lastTransitionTime":"2026-01-28T15:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.779097 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.779150 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.779169 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.779195 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.779212 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:18Z","lastTransitionTime":"2026-01-28T15:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.866477 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 23:21:30.884471394 +0000 UTC Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.882535 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.882619 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.882638 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.882661 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.882680 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:18Z","lastTransitionTime":"2026-01-28T15:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.903233 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.903288 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:18 crc kubenswrapper[4871]: E0128 15:18:18.903394 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:18 crc kubenswrapper[4871]: E0128 15:18:18.903523 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.928224 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5
a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.945573 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.964679 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jp46k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64aa044d-1eb6-4e5f-9c12-96ba346374fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jp46k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.985936 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.985988 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.986006 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.986032 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.986053 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:18Z","lastTransitionTime":"2026-01-28T15:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:18 crc kubenswrapper[4871]: I0128 15:18:18.988379 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to 
sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.009879 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479d0f29-9dce-41a2-9b1a-75e157c26a03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a2644afc15f39effb1daa734bb53944ad9c7e05e72fbf8e8627879b5cc6c473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31213dea27da9a677ea193b8780cc0e632e1263829731856bf682a3e1853e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd606211ea6e58c66c05dd5ecaf3a0c220b19bc7c82d1ea8d298ae82bd1b675e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.027553 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.045122 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.076305 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:14Z\\\",\\\"message\\\":\\\"rnalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 15:18:14.984464 6544 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984474 6544 services_controller.go:445] Built service 
openshift-kube-scheduler-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984453 6544 services_controller.go:451] Built service openshift-route-controller-manager/route-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:18:14.983398 6544 default_network_controller.go:776] Recording success ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77
da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.088940 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.089014 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.089037 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.089068 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.089091 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:19Z","lastTransitionTime":"2026-01-28T15:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.099625 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.114395 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.126788 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:18:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.136058 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs\") pod \"network-metrics-daemon-jp46k\" (UID: \"64aa044d-1eb6-4e5f-9c12-96ba346374fa\") " pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:19 crc kubenswrapper[4871]: E0128 15:18:19.136344 4871 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:18:19 crc kubenswrapper[4871]: E0128 15:18:19.136466 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs podName:64aa044d-1eb6-4e5f-9c12-96ba346374fa nodeName:}" failed. No retries permitted until 2026-01-28 15:18:35.136413532 +0000 UTC m=+67.032251894 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs") pod "network-metrics-daemon-jp46k" (UID: "64aa044d-1eb6-4e5f-9c12-96ba346374fa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.138929 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.161543 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.176171 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.192212 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.192287 
4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.192303 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.192328 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.192345 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:19Z","lastTransitionTime":"2026-01-28T15:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.194353 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.209899 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.225700 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.238610 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cfcf58f96119fd03a37cc47985518ce33dd48984718cc2968b6ebb43bd6913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd7df877
0de3b5fe13ad0ac02c6a85f864af968e4ef04084b30f7f566736090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.295420 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.295486 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.295507 4871 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.295536 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.295555 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:19Z","lastTransitionTime":"2026-01-28T15:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.398580 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.398667 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.398678 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.398700 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.398711 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:19Z","lastTransitionTime":"2026-01-28T15:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.501329 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.501392 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.501412 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.501438 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.501457 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:19Z","lastTransitionTime":"2026-01-28T15:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.605120 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.605186 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.605204 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.605226 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.605245 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:19Z","lastTransitionTime":"2026-01-28T15:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.708869 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.708937 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.708956 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.709037 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.709065 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:19Z","lastTransitionTime":"2026-01-28T15:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.812118 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.812191 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.812212 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.812238 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.812254 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:19Z","lastTransitionTime":"2026-01-28T15:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.867421 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 17:56:02.201392112 +0000 UTC Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.903979 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.904045 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:19 crc kubenswrapper[4871]: E0128 15:18:19.904173 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:19 crc kubenswrapper[4871]: E0128 15:18:19.904294 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.916910 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.917400 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.917501 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.917569 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:19 crc kubenswrapper[4871]: I0128 15:18:19.917681 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:19Z","lastTransitionTime":"2026-01-28T15:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.020106 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.020178 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.020203 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.020231 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.020250 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:20Z","lastTransitionTime":"2026-01-28T15:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.123499 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.123544 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.123554 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.123568 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.123577 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:20Z","lastTransitionTime":"2026-01-28T15:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.148109 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.148283 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:20 crc kubenswrapper[4871]: E0128 15:18:20.148344 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:18:52.148303577 +0000 UTC m=+84.044141909 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.148405 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:20 crc kubenswrapper[4871]: E0128 15:18:20.148451 4871 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:18:20 crc kubenswrapper[4871]: E0128 15:18:20.148475 4871 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.148470 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:20 crc kubenswrapper[4871]: E0128 15:18:20.148528 4871 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:18:52.148504133 +0000 UTC m=+84.044342495 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:18:20 crc kubenswrapper[4871]: E0128 15:18:20.148570 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:18:52.148556745 +0000 UTC m=+84.044395077 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:18:20 crc kubenswrapper[4871]: E0128 15:18:20.148619 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:18:20 crc kubenswrapper[4871]: E0128 15:18:20.148638 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.148642 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:20 crc kubenswrapper[4871]: E0128 15:18:20.148652 4871 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:18:20 crc kubenswrapper[4871]: E0128 15:18:20.148712 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:18:20 crc kubenswrapper[4871]: E0128 15:18:20.148724 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:18:20 crc kubenswrapper[4871]: E0128 15:18:20.148734 4871 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:18:20 crc kubenswrapper[4871]: E0128 15:18:20.148723 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:18:52.14871375 +0000 UTC m=+84.044552092 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:18:20 crc kubenswrapper[4871]: E0128 15:18:20.148792 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:18:52.148782552 +0000 UTC m=+84.044620994 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.226805 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.226998 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.227034 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.227117 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.227145 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:20Z","lastTransitionTime":"2026-01-28T15:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.329897 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.329946 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.329966 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.329991 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.330008 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:20Z","lastTransitionTime":"2026-01-28T15:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.432611 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.432701 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.432757 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.432788 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.432808 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:20Z","lastTransitionTime":"2026-01-28T15:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.536223 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.536304 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.536344 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.536376 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.536394 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:20Z","lastTransitionTime":"2026-01-28T15:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.639518 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.639580 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.639622 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.639643 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.639659 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:20Z","lastTransitionTime":"2026-01-28T15:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.742175 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.742552 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.742566 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.742612 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.742625 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:20Z","lastTransitionTime":"2026-01-28T15:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.845683 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.845764 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.845787 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.845812 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.845832 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:20Z","lastTransitionTime":"2026-01-28T15:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.867577 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:46:38.529598046 +0000 UTC Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.903351 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:20 crc kubenswrapper[4871]: E0128 15:18:20.903618 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.903716 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:20 crc kubenswrapper[4871]: E0128 15:18:20.903907 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.949790 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.949913 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.949933 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.949961 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:20 crc kubenswrapper[4871]: I0128 15:18:20.949978 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:20Z","lastTransitionTime":"2026-01-28T15:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.053512 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.053575 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.053630 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.053657 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.053675 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:21Z","lastTransitionTime":"2026-01-28T15:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.157063 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.157137 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.157155 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.157179 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.157196 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:21Z","lastTransitionTime":"2026-01-28T15:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.260351 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.260414 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.260433 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.260458 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.260476 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:21Z","lastTransitionTime":"2026-01-28T15:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.363915 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.364007 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.364069 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.364099 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.364118 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:21Z","lastTransitionTime":"2026-01-28T15:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.467629 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.467680 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.467694 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.467716 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.467728 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:21Z","lastTransitionTime":"2026-01-28T15:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.570998 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.571055 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.571070 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.571088 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.571102 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:21Z","lastTransitionTime":"2026-01-28T15:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.676277 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.676348 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.676358 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.676373 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.676383 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:21Z","lastTransitionTime":"2026-01-28T15:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.778931 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.778988 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.778998 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.779015 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.779026 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:21Z","lastTransitionTime":"2026-01-28T15:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.868057 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 13:36:56.953007184 +0000 UTC Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.882323 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.882393 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.882413 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.882441 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.882462 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:21Z","lastTransitionTime":"2026-01-28T15:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.903332 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.903423 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:21 crc kubenswrapper[4871]: E0128 15:18:21.903501 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:21 crc kubenswrapper[4871]: E0128 15:18:21.903680 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.986206 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.986278 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.986300 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.986332 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:21 crc kubenswrapper[4871]: I0128 15:18:21.986357 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:21Z","lastTransitionTime":"2026-01-28T15:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.089957 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.090018 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.090029 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.090050 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.090062 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:22Z","lastTransitionTime":"2026-01-28T15:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.135320 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.154524 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.171558 4871 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.187339 4871 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cfcf58f96119fd03a37cc47985518ce33dd48984718cc2968b6ebb43bd6913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd7df8770de3b5fe13ad0ac02c6a85f864af968e4ef04084b30f7f566736090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.192720 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.192755 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.192765 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.192782 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.192793 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:22Z","lastTransitionTime":"2026-01-28T15:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.204569 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\"
,\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.216943 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"479d0f29-9dce-41a2-9b1a-75e157c26a03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a2644afc15f39effb1daa734bb53944ad9c7e05e72fbf8e8627879b5cc6c473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31213dea27da9a677ea193b8780cc0e632e1263829731856bf682a3e1853e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd606211ea6e58c66c05dd5ecaf3a0c220b19bc7c82d1ea8d298ae82bd1b675e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.234116 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.250870 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.261957 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69
520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.271378 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jp46k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64aa044d-1eb6-4e5f-9c12-96ba346374fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jp46k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:22 crc 
kubenswrapper[4871]: I0128 15:18:22.287639 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.296209 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.296255 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.296266 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.296289 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.296304 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:22Z","lastTransitionTime":"2026-01-28T15:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.305399 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.317718 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.340044 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:14Z\\\",\\\"message\\\":\\\"rnalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 15:18:14.984464 6544 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984474 6544 services_controller.go:445] Built service 
openshift-kube-scheduler-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984453 6544 services_controller.go:451] Built service openshift-route-controller-manager/route-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:18:14.983398 6544 default_network_controller.go:776] Recording success ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77
da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.361785 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.374553 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.393136 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.402191 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.402237 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.402250 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.402275 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.402290 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:22Z","lastTransitionTime":"2026-01-28T15:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.409740 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.427565 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.505646 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.505694 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.505706 4871 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.505734 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.505753 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:22Z","lastTransitionTime":"2026-01-28T15:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.629411 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.629450 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.629463 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.629482 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.629493 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:22Z","lastTransitionTime":"2026-01-28T15:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.741417 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.741472 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.741483 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.741503 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.741515 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:22Z","lastTransitionTime":"2026-01-28T15:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.844669 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.844724 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.844741 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.844761 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.844773 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:22Z","lastTransitionTime":"2026-01-28T15:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.869058 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 04:29:28.252737026 +0000 UTC Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.903135 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:22 crc kubenswrapper[4871]: E0128 15:18:22.903338 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.903581 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:22 crc kubenswrapper[4871]: E0128 15:18:22.903910 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.947465 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.947803 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.947897 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.948028 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:22 crc kubenswrapper[4871]: I0128 15:18:22.948138 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:22Z","lastTransitionTime":"2026-01-28T15:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.051177 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.051270 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.051303 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.051343 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.051367 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:23Z","lastTransitionTime":"2026-01-28T15:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.153984 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.154062 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.154082 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.154111 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.154131 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:23Z","lastTransitionTime":"2026-01-28T15:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.256729 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.256794 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.256808 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.256826 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.256855 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:23Z","lastTransitionTime":"2026-01-28T15:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.359049 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.359094 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.359110 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.359132 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.359147 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:23Z","lastTransitionTime":"2026-01-28T15:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.462557 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.462645 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.462658 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.462682 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.463020 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:23Z","lastTransitionTime":"2026-01-28T15:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.565534 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.565647 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.565667 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.565697 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.565721 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:23Z","lastTransitionTime":"2026-01-28T15:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.670866 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.670935 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.670958 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.670990 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.671012 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:23Z","lastTransitionTime":"2026-01-28T15:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.774218 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.774281 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.774299 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.774322 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.774341 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:23Z","lastTransitionTime":"2026-01-28T15:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.869838 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 14:47:46.115790314 +0000 UTC Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.877484 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.877555 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.877574 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.877632 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.877651 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:23Z","lastTransitionTime":"2026-01-28T15:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.903943 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.904059 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:23 crc kubenswrapper[4871]: E0128 15:18:23.904356 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:23 crc kubenswrapper[4871]: E0128 15:18:23.904491 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.980857 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.980919 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.980949 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.980986 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:23 crc kubenswrapper[4871]: I0128 15:18:23.981013 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:23Z","lastTransitionTime":"2026-01-28T15:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.083957 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.084024 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.084042 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.084071 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.084089 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:24Z","lastTransitionTime":"2026-01-28T15:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.187926 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.187998 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.188022 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.188056 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.188081 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:24Z","lastTransitionTime":"2026-01-28T15:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.292001 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.292064 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.292080 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.292101 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.292118 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:24Z","lastTransitionTime":"2026-01-28T15:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.395006 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.395079 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.395100 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.395127 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.395148 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:24Z","lastTransitionTime":"2026-01-28T15:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.498818 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.498891 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.498909 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.498935 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.498953 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:24Z","lastTransitionTime":"2026-01-28T15:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.602085 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.602777 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.602812 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.602845 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.602870 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:24Z","lastTransitionTime":"2026-01-28T15:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.705656 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.706031 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.706173 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.706319 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.706441 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:24Z","lastTransitionTime":"2026-01-28T15:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.809444 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.809479 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.809490 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.809503 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.809515 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:24Z","lastTransitionTime":"2026-01-28T15:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.870543 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:16:14.069810897 +0000 UTC Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.903423 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.903501 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:24 crc kubenswrapper[4871]: E0128 15:18:24.903545 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:24 crc kubenswrapper[4871]: E0128 15:18:24.903685 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.914735 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.914778 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.914799 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.914819 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:24 crc kubenswrapper[4871]: I0128 15:18:24.914833 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:24Z","lastTransitionTime":"2026-01-28T15:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.017806 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.017904 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.017959 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.017982 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.017999 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:25Z","lastTransitionTime":"2026-01-28T15:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.121554 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.121642 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.121661 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.121684 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.121702 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:25Z","lastTransitionTime":"2026-01-28T15:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.225476 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.225545 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.225563 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.225583 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.225618 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:25Z","lastTransitionTime":"2026-01-28T15:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.329405 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.329448 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.329460 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.329476 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.329489 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:25Z","lastTransitionTime":"2026-01-28T15:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.341880 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.341946 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.341970 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.342001 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.342023 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:25Z","lastTransitionTime":"2026-01-28T15:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:25 crc kubenswrapper[4871]: E0128 15:18:25.364168 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.369090 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.369225 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.369250 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.369279 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.369303 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:25Z","lastTransitionTime":"2026-01-28T15:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:25 crc kubenswrapper[4871]: E0128 15:18:25.389845 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.395273 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.395321 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.395343 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.395376 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.395399 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:25Z","lastTransitionTime":"2026-01-28T15:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.423387 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.423454 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.423476 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.423504 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.423522 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:25Z","lastTransitionTime":"2026-01-28T15:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.446198 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.446247 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.446271 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.446335 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.446350 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:25Z","lastTransitionTime":"2026-01-28T15:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:25 crc kubenswrapper[4871]: E0128 15:18:25.459641 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:25 crc kubenswrapper[4871]: E0128 15:18:25.459805 4871 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.461511 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.461581 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.461629 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.461645 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.461654 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:25Z","lastTransitionTime":"2026-01-28T15:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.564896 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.564983 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.565004 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.565036 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.565059 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:25Z","lastTransitionTime":"2026-01-28T15:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.668761 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.668832 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.668901 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.668934 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.668956 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:25Z","lastTransitionTime":"2026-01-28T15:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.773839 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.773911 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.774008 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.774104 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.774129 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:25Z","lastTransitionTime":"2026-01-28T15:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.871250 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:01:35.943110954 +0000 UTC Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.876691 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.876738 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.876750 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.876767 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.876784 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:25Z","lastTransitionTime":"2026-01-28T15:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.903405 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.903499 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:25 crc kubenswrapper[4871]: E0128 15:18:25.903656 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:25 crc kubenswrapper[4871]: E0128 15:18:25.903710 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.979156 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.979211 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.979228 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.979260 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:25 crc kubenswrapper[4871]: I0128 15:18:25.979276 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:25Z","lastTransitionTime":"2026-01-28T15:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.083372 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.083410 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.083419 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.083436 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.083445 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:26Z","lastTransitionTime":"2026-01-28T15:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.186716 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.186780 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.186797 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.186820 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.186837 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:26Z","lastTransitionTime":"2026-01-28T15:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.289291 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.289332 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.289341 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.289356 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.289366 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:26Z","lastTransitionTime":"2026-01-28T15:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.392259 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.392326 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.392346 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.392371 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.392390 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:26Z","lastTransitionTime":"2026-01-28T15:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.495977 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.496042 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.496062 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.496089 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.496108 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:26Z","lastTransitionTime":"2026-01-28T15:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.599118 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.599190 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.599212 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.599242 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.599263 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:26Z","lastTransitionTime":"2026-01-28T15:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.701740 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.701813 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.701837 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.701867 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.701890 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:26Z","lastTransitionTime":"2026-01-28T15:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.804302 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.804338 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.804347 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.804361 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.804370 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:26Z","lastTransitionTime":"2026-01-28T15:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.871429 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 23:56:58.431712512 +0000 UTC Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.903129 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.903149 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:26 crc kubenswrapper[4871]: E0128 15:18:26.903366 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:26 crc kubenswrapper[4871]: E0128 15:18:26.903455 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.907843 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.907916 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.907937 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.907962 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:26 crc kubenswrapper[4871]: I0128 15:18:26.907983 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:26Z","lastTransitionTime":"2026-01-28T15:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.024299 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.024364 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.024383 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.024407 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.024422 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:27Z","lastTransitionTime":"2026-01-28T15:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.127063 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.127134 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.127181 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.127216 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.127240 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:27Z","lastTransitionTime":"2026-01-28T15:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.229683 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.229743 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.229759 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.229782 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.229803 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:27Z","lastTransitionTime":"2026-01-28T15:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.332555 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.332698 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.332723 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.332753 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.332775 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:27Z","lastTransitionTime":"2026-01-28T15:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.436106 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.436167 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.436185 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.436208 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.436225 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:27Z","lastTransitionTime":"2026-01-28T15:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.540024 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.540089 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.540113 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.540141 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.540165 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:27Z","lastTransitionTime":"2026-01-28T15:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.643820 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.643977 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.644002 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.644034 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.644061 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:27Z","lastTransitionTime":"2026-01-28T15:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.747254 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.747313 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.747331 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.747372 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.747391 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:27Z","lastTransitionTime":"2026-01-28T15:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.850907 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.850960 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.850977 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.851004 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.851023 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:27Z","lastTransitionTime":"2026-01-28T15:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.872651 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:44:24.407866882 +0000 UTC Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.903094 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.903123 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:27 crc kubenswrapper[4871]: E0128 15:18:27.903277 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:27 crc kubenswrapper[4871]: E0128 15:18:27.903488 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.953887 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.953949 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.953969 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.953994 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:27 crc kubenswrapper[4871]: I0128 15:18:27.954014 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:27Z","lastTransitionTime":"2026-01-28T15:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.057610 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.057659 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.057672 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.057691 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.057704 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:28Z","lastTransitionTime":"2026-01-28T15:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.161455 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.161629 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.161657 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.161690 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.161715 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:28Z","lastTransitionTime":"2026-01-28T15:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.264175 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.264219 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.264229 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.264244 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.264259 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:28Z","lastTransitionTime":"2026-01-28T15:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.367517 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.367583 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.367633 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.367660 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.367680 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:28Z","lastTransitionTime":"2026-01-28T15:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.471099 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.471153 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.471164 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.471183 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.471198 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:28Z","lastTransitionTime":"2026-01-28T15:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.574249 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.574304 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.574316 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.574333 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.574345 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:28Z","lastTransitionTime":"2026-01-28T15:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.676418 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.676458 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.676499 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.676516 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.676529 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:28Z","lastTransitionTime":"2026-01-28T15:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.780039 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.780084 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.780092 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.780105 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.780115 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:28Z","lastTransitionTime":"2026-01-28T15:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.872920 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 23:29:47.959200596 +0000 UTC Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.883025 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.883082 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.883100 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.883123 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.883141 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:28Z","lastTransitionTime":"2026-01-28T15:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.902878 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.902937 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:28 crc kubenswrapper[4871]: E0128 15:18:28.903019 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:28 crc kubenswrapper[4871]: E0128 15:18:28.903118 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.921637 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cfcf58f96119fd03a37cc47985518ce33dd48984718cc2968b6ebb43bd6913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd7df8770de3b5fe13ad0ac02c6a85f864af
968e4ef04084b30f7f566736090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.939113 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.959416 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.979767 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.986045 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.986097 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.986116 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.986141 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:28 crc kubenswrapper[4871]: I0128 15:18:28.986158 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:28Z","lastTransitionTime":"2026-01-28T15:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.007532 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.022571 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.038935 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jp46k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64aa044d-1eb6-4e5f-9c12-96ba346374fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jp46k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:29 crc 
kubenswrapper[4871]: I0128 15:18:29.059138 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0
b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 
15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.078057 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"479d0f29-9dce-41a2-9b1a-75e157c26a03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a2644afc15f39effb1daa734bb53944ad9c7e05e72fbf8e8627879b5cc6c473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31213dea27da9a677ea193b8780cc0e632e1263829731856bf682a3e1853e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd606211ea6e58c66c05dd5ecaf3a0c220b19bc7c82d1ea8d298ae82bd1b675e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.088618 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.088667 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.088683 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.088706 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.088723 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:29Z","lastTransitionTime":"2026-01-28T15:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.097911 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.115029 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.147461 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:14Z\\\",\\\"message\\\":\\\"rnalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 15:18:14.984464 6544 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984474 6544 services_controller.go:445] Built service 
openshift-kube-scheduler-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984453 6544 services_controller.go:451] Built service openshift-route-controller-manager/route-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:18:14.983398 6544 default_network_controller.go:776] Recording success ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77
da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.167371 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.188283 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.191336 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.191395 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.191419 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.191449 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.191473 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:29Z","lastTransitionTime":"2026-01-28T15:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.209403 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.230844 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.253915 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.274032 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.294192 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.294233 
4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.294244 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.294260 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.294272 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:29Z","lastTransitionTime":"2026-01-28T15:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.396323 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.396380 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.396396 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.396413 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.396427 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:29Z","lastTransitionTime":"2026-01-28T15:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.498205 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.498482 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.498495 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.498511 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.498525 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:29Z","lastTransitionTime":"2026-01-28T15:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.600882 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.600937 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.600954 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.600978 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.600997 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:29Z","lastTransitionTime":"2026-01-28T15:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.703895 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.703942 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.703954 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.703971 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.703982 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:29Z","lastTransitionTime":"2026-01-28T15:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.806352 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.806405 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.806419 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.806439 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.806451 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:29Z","lastTransitionTime":"2026-01-28T15:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.873773 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:47:53.163394966 +0000 UTC
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.903534 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.903628 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:18:29 crc kubenswrapper[4871]: E0128 15:18:29.903756 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:18:29 crc kubenswrapper[4871]: E0128 15:18:29.904231 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.904558 4871 scope.go:117] "RemoveContainer" containerID="263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff"
Jan 28 15:18:29 crc kubenswrapper[4871]: E0128 15:18:29.904798 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podUID="178343c8-b657-4440-953e-6daef3609145"
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.909491 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.909532 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.909553 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.909571 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:29 crc kubenswrapper[4871]: I0128 15:18:29.909584 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:29Z","lastTransitionTime":"2026-01-28T15:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.012318 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.012426 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.012452 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.012481 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.012503 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:30Z","lastTransitionTime":"2026-01-28T15:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.116062 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.116113 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.116125 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.116145 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.116157 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:30Z","lastTransitionTime":"2026-01-28T15:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.218454 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.218482 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.218491 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.218503 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.218511 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:30Z","lastTransitionTime":"2026-01-28T15:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.320887 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.320943 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.320960 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.320980 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.320992 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:30Z","lastTransitionTime":"2026-01-28T15:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.422788 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.422864 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.422874 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.422891 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.422900 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:30Z","lastTransitionTime":"2026-01-28T15:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.524758 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.524784 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.524795 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.524807 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.524816 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:30Z","lastTransitionTime":"2026-01-28T15:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.627290 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.627321 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.627329 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.627344 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.627353 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:30Z","lastTransitionTime":"2026-01-28T15:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.731208 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.731260 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.731274 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.731290 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.731303 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:30Z","lastTransitionTime":"2026-01-28T15:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.833843 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.833886 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.833899 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.833914 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.833926 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:30Z","lastTransitionTime":"2026-01-28T15:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.874802 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 06:53:09.999532168 +0000 UTC
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.903553 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.903657 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:18:30 crc kubenswrapper[4871]: E0128 15:18:30.903844 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa"
Jan 28 15:18:30 crc kubenswrapper[4871]: E0128 15:18:30.903970 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.936515 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.936645 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.936671 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.936695 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:30 crc kubenswrapper[4871]: I0128 15:18:30.936714 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:30Z","lastTransitionTime":"2026-01-28T15:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.039521 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.039572 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.039582 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.039623 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.039639 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:31Z","lastTransitionTime":"2026-01-28T15:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.142126 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.142166 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.142176 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.142191 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.142201 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:31Z","lastTransitionTime":"2026-01-28T15:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.244130 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.244178 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.244189 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.244205 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.244215 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:31Z","lastTransitionTime":"2026-01-28T15:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.347107 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.347155 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.347166 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.347184 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.347195 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:31Z","lastTransitionTime":"2026-01-28T15:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.450441 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.450506 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.450525 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.450551 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.450570 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:31Z","lastTransitionTime":"2026-01-28T15:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.553745 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.553795 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.553806 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.553826 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.553838 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:31Z","lastTransitionTime":"2026-01-28T15:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.657512 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.657563 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.657574 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.657627 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.657638 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:31Z","lastTransitionTime":"2026-01-28T15:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.760264 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.760322 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.760332 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.760350 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.760365 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:31Z","lastTransitionTime":"2026-01-28T15:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.864336 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.864538 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.864649 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.864727 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.864793 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:31Z","lastTransitionTime":"2026-01-28T15:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.875419 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:57:03.451385026 +0000 UTC
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.903135 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.903150 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:18:31 crc kubenswrapper[4871]: E0128 15:18:31.903289 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:18:31 crc kubenswrapper[4871]: E0128 15:18:31.903385 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.969666 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.969727 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.969739 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.969757 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:31 crc kubenswrapper[4871]: I0128 15:18:31.969770 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:31Z","lastTransitionTime":"2026-01-28T15:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.074875 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.074940 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.074954 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.074975 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.074989 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:32Z","lastTransitionTime":"2026-01-28T15:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.178476 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.178540 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.178555 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.178579 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.178610 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:32Z","lastTransitionTime":"2026-01-28T15:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.281447 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.281510 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.281527 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.281552 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.281573 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:32Z","lastTransitionTime":"2026-01-28T15:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.384654 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.384710 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.384721 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.384742 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.384754 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:32Z","lastTransitionTime":"2026-01-28T15:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.487623 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.487675 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.487684 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.487703 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.487716 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:32Z","lastTransitionTime":"2026-01-28T15:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.591341 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.591421 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.591444 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.591512 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.591539 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:32Z","lastTransitionTime":"2026-01-28T15:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.694722 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.694776 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.694794 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.694817 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.694831 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:32Z","lastTransitionTime":"2026-01-28T15:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.797346 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.797388 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.797396 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.797409 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.797420 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:32Z","lastTransitionTime":"2026-01-28T15:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.875960 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:13:42.426425265 +0000 UTC Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.900134 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.900201 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.900223 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.900253 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.900274 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:32Z","lastTransitionTime":"2026-01-28T15:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.903559 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:32 crc kubenswrapper[4871]: I0128 15:18:32.903633 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:32 crc kubenswrapper[4871]: E0128 15:18:32.903787 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:32 crc kubenswrapper[4871]: E0128 15:18:32.903942 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.002509 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.002603 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.002621 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.002646 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.002668 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:33Z","lastTransitionTime":"2026-01-28T15:18:33Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.105388 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.105434 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.105445 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.105461 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.105472 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:33Z","lastTransitionTime":"2026-01-28T15:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.208563 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.208669 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.208682 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.208698 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.208711 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:33Z","lastTransitionTime":"2026-01-28T15:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.312183 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.312261 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.312282 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.312308 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.312323 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:33Z","lastTransitionTime":"2026-01-28T15:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.414300 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.414356 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.414368 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.414393 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.414408 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:33Z","lastTransitionTime":"2026-01-28T15:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.517903 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.517948 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.517961 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.517978 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.517989 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:33Z","lastTransitionTime":"2026-01-28T15:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.621457 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.621514 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.621527 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.621552 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.621566 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:33Z","lastTransitionTime":"2026-01-28T15:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.725138 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.725192 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.725211 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.725233 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.725246 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:33Z","lastTransitionTime":"2026-01-28T15:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.827979 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.828032 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.828046 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.828067 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.828080 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:33Z","lastTransitionTime":"2026-01-28T15:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.876921 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 10:47:00.294649499 +0000 UTC Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.903393 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.903510 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:33 crc kubenswrapper[4871]: E0128 15:18:33.903551 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:33 crc kubenswrapper[4871]: E0128 15:18:33.903871 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.917913 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.930254 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.930295 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.930310 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.930329 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:33 crc kubenswrapper[4871]: I0128 15:18:33.930345 4871 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:33Z","lastTransitionTime":"2026-01-28T15:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.032514 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.032571 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.032604 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.032627 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.032646 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:34Z","lastTransitionTime":"2026-01-28T15:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.135152 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.135226 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.135239 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.135263 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.135276 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:34Z","lastTransitionTime":"2026-01-28T15:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.238160 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.238217 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.238231 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.238248 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.238261 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:34Z","lastTransitionTime":"2026-01-28T15:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.341667 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.341716 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.341726 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.341739 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.341753 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:34Z","lastTransitionTime":"2026-01-28T15:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.444474 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.444518 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.444532 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.444547 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.444557 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:34Z","lastTransitionTime":"2026-01-28T15:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.547666 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.547717 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.547732 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.547753 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.547764 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:34Z","lastTransitionTime":"2026-01-28T15:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.650894 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.650935 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.650946 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.650961 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.650969 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:34Z","lastTransitionTime":"2026-01-28T15:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.753264 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.753344 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.753354 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.753368 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.753377 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:34Z","lastTransitionTime":"2026-01-28T15:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.855842 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.855888 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.855897 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.855916 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.855928 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:34Z","lastTransitionTime":"2026-01-28T15:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.877703 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 18:18:12.444530511 +0000 UTC Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.903298 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.903359 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:34 crc kubenswrapper[4871]: E0128 15:18:34.903495 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:34 crc kubenswrapper[4871]: E0128 15:18:34.903750 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.958342 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.958390 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.958399 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.958415 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:34 crc kubenswrapper[4871]: I0128 15:18:34.958426 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:34Z","lastTransitionTime":"2026-01-28T15:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.061198 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.061243 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.061254 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.061271 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.061282 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:35Z","lastTransitionTime":"2026-01-28T15:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.154866 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs\") pod \"network-metrics-daemon-jp46k\" (UID: \"64aa044d-1eb6-4e5f-9c12-96ba346374fa\") " pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:35 crc kubenswrapper[4871]: E0128 15:18:35.155028 4871 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:18:35 crc kubenswrapper[4871]: E0128 15:18:35.155086 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs podName:64aa044d-1eb6-4e5f-9c12-96ba346374fa nodeName:}" failed. No retries permitted until 2026-01-28 15:19:07.155071257 +0000 UTC m=+99.050909579 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs") pod "network-metrics-daemon-jp46k" (UID: "64aa044d-1eb6-4e5f-9c12-96ba346374fa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.163127 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.163161 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.163171 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.163188 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.163199 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:35Z","lastTransitionTime":"2026-01-28T15:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.266422 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.266473 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.266486 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.266508 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.266523 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:35Z","lastTransitionTime":"2026-01-28T15:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.369304 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.369340 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.369348 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.369362 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.369374 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:35Z","lastTransitionTime":"2026-01-28T15:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.472150 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.472197 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.472213 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.472235 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.472250 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:35Z","lastTransitionTime":"2026-01-28T15:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.574683 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.574734 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.574744 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.574761 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.574772 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:35Z","lastTransitionTime":"2026-01-28T15:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.676901 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.676946 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.676954 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.676967 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.676978 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:35Z","lastTransitionTime":"2026-01-28T15:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.713112 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.713156 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.713165 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.713182 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.713195 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:35Z","lastTransitionTime":"2026-01-28T15:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:35 crc kubenswrapper[4871]: E0128 15:18:35.727623 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.731352 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.731392 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.731401 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.731417 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.731437 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:35Z","lastTransitionTime":"2026-01-28T15:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:35 crc kubenswrapper[4871]: E0128 15:18:35.742574 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.745824 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.745872 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.745883 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.745898 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.745907 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:35Z","lastTransitionTime":"2026-01-28T15:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:35 crc kubenswrapper[4871]: E0128 15:18:35.758913 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.761871 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.761906 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.761918 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.761933 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.761946 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:35Z","lastTransitionTime":"2026-01-28T15:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:35 crc kubenswrapper[4871]: E0128 15:18:35.772221 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.775335 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.775360 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.775368 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.775382 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.775390 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:35Z","lastTransitionTime":"2026-01-28T15:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:35 crc kubenswrapper[4871]: E0128 15:18:35.787404 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:35 crc kubenswrapper[4871]: E0128 15:18:35.787527 4871 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.788873 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.788897 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.788904 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.788918 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.788927 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:35Z","lastTransitionTime":"2026-01-28T15:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.878712 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 15:46:48.621154894 +0000 UTC Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.891948 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.891977 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.891993 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.892006 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.892016 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:35Z","lastTransitionTime":"2026-01-28T15:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.903019 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.903071 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:35 crc kubenswrapper[4871]: E0128 15:18:35.903262 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:35 crc kubenswrapper[4871]: E0128 15:18:35.903445 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.994892 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.994977 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.994991 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.995014 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:35 crc kubenswrapper[4871]: I0128 15:18:35.995028 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:35Z","lastTransitionTime":"2026-01-28T15:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.097668 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.097718 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.097731 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.097747 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.097760 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:36Z","lastTransitionTime":"2026-01-28T15:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.199740 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.200186 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.200404 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.200639 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.200962 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:36Z","lastTransitionTime":"2026-01-28T15:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.303445 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.303473 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.303482 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.303496 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.303504 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:36Z","lastTransitionTime":"2026-01-28T15:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.405751 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.405809 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.405826 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.405848 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.405865 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:36Z","lastTransitionTime":"2026-01-28T15:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.508126 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.508179 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.508189 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.508205 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.508215 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:36Z","lastTransitionTime":"2026-01-28T15:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.610756 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.610847 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.610861 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.610885 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.610900 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:36Z","lastTransitionTime":"2026-01-28T15:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.713880 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.713940 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.713957 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.713981 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.714001 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:36Z","lastTransitionTime":"2026-01-28T15:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.816750 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.816793 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.816802 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.816817 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.816826 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:36Z","lastTransitionTime":"2026-01-28T15:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.878995 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 21:54:41.524167698 +0000 UTC Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.903304 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.903401 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:36 crc kubenswrapper[4871]: E0128 15:18:36.903430 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:36 crc kubenswrapper[4871]: E0128 15:18:36.903558 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.919774 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.919814 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.919827 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.919844 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:36 crc kubenswrapper[4871]: I0128 15:18:36.919856 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:36Z","lastTransitionTime":"2026-01-28T15:18:36Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.022912 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.022946 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.022957 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.022975 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.022987 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:37Z","lastTransitionTime":"2026-01-28T15:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.125582 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.125647 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.125670 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.125699 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.125739 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:37Z","lastTransitionTime":"2026-01-28T15:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.228658 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.228714 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.228728 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.228749 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.228766 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:37Z","lastTransitionTime":"2026-01-28T15:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.330971 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.331022 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.331031 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.331045 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.331053 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:37Z","lastTransitionTime":"2026-01-28T15:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.408532 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-45mlg_d1955ba7-b91c-41de-97b7-188922cc0907/kube-multus/0.log" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.408648 4871 generic.go:334] "Generic (PLEG): container finished" podID="d1955ba7-b91c-41de-97b7-188922cc0907" containerID="7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb" exitCode=1 Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.408700 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-45mlg" event={"ID":"d1955ba7-b91c-41de-97b7-188922cc0907","Type":"ContainerDied","Data":"7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb"} Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.409307 4871 scope.go:117] "RemoveContainer" containerID="7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.423709 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.434382 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.434438 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.434454 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.434475 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.434488 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:37Z","lastTransitionTime":"2026-01-28T15:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.439580 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.457417 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.480424 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.492704 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.504666 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cfcf58f96119fd03a37cc47985518ce33dd48984718cc2968b6ebb43bd6913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd7df8770de3b5fe13ad0ac02c6a85f864af
968e4ef04084b30f7f566736090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.515522 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.527457 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:37Z\\\",\\\"message\\\":\\\"2026-01-28T15:17:52+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1db8ba10-58e2-4c2c-b351-ef1e6613bdd9\\\\n2026-01-28T15:17:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1db8ba10-58e2-4c2c-b351-ef1e6613bdd9 to /host/opt/cni/bin/\\\\n2026-01-28T15:17:52Z [verbose] multus-daemon started\\\\n2026-01-28T15:17:52Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:18:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.537337 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.537451 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.537521 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.537609 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.537680 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:37Z","lastTransitionTime":"2026-01-28T15:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.542702 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.557147 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b
819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedf
d01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0
bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97f
b8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.567080 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.576621 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jp46k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64aa044d-1eb6-4e5f-9c12-96ba346374fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jp46k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:37 crc 
kubenswrapper[4871]: I0128 15:18:37.588945 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0
b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 
15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.598423 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"479d0f29-9dce-41a2-9b1a-75e157c26a03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a2644afc15f39effb1daa734bb53944ad9c7e05e72fbf8e8627879b5cc6c473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31213dea27da9a677ea193b8780cc0e632e1263829731856bf682a3e1853e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd606211ea6e58c66c05dd5ecaf3a0c220b19bc7c82d1ea8d298ae82bd1b675e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.607528 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.618369 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.638442 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:14Z\\\",\\\"message\\\":\\\"rnalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 15:18:14.984464 6544 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984474 6544 services_controller.go:445] Built service 
openshift-kube-scheduler-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984453 6544 services_controller.go:451] Built service openshift-route-controller-manager/route-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:18:14.983398 6544 default_network_controller.go:776] Recording success ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77
da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.641667 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.641704 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.641759 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.641775 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.641785 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:37Z","lastTransitionTime":"2026-01-28T15:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.649150 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9021abaa-911b-4eff-94d7-319546d60422\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1efc7e9400c3dfe89bbbdb0b78c108d7a5d013c433d5b62f53fe338f5d49b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ff221c97a59e61461b6518db7c1e3b30bf02f544c14f247fa74ceea7317d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0ff221c97a59e61461b6518db7c1e3b30bf02f544c14f247fa74ceea7317d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.660440 4871 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.744293 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.744330 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.744340 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.744360 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.744372 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:37Z","lastTransitionTime":"2026-01-28T15:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.853810 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.853864 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.853874 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.853891 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.853902 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:37Z","lastTransitionTime":"2026-01-28T15:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.880151 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 03:16:43.300877417 +0000 UTC Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.903828 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.903910 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:37 crc kubenswrapper[4871]: E0128 15:18:37.903975 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:37 crc kubenswrapper[4871]: E0128 15:18:37.904146 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.956863 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.956899 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.956909 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.956928 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:37 crc kubenswrapper[4871]: I0128 15:18:37.956940 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:37Z","lastTransitionTime":"2026-01-28T15:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.059489 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.059549 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.059562 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.059582 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.059623 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:38Z","lastTransitionTime":"2026-01-28T15:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.162525 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.162558 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.162566 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.162579 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.162605 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:38Z","lastTransitionTime":"2026-01-28T15:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.265741 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.265810 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.265839 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.265868 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.265889 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:38Z","lastTransitionTime":"2026-01-28T15:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.369112 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.369178 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.369189 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.369209 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.369222 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:38Z","lastTransitionTime":"2026-01-28T15:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.413776 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-45mlg_d1955ba7-b91c-41de-97b7-188922cc0907/kube-multus/0.log" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.413838 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-45mlg" event={"ID":"d1955ba7-b91c-41de-97b7-188922cc0907","Type":"ContainerStarted","Data":"27c2298a7ba740a339e0cf8710c12bd89e613c3450bf9bbc1fdbf21d93e3da41"} Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.424189 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.438011 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c2298a7ba740a339e0cf8710c12bd89e613c3450bf9bbc1fdbf21d93e3da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28
T15:18:37Z\\\",\\\"message\\\":\\\"2026-01-28T15:17:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1db8ba10-58e2-4c2c-b351-ef1e6613bdd9\\\\n2026-01-28T15:17:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1db8ba10-58e2-4c2c-b351-ef1e6613bdd9 to /host/opt/cni/bin/\\\\n2026-01-28T15:17:52Z [verbose] multus-daemon started\\\\n2026-01-28T15:17:52Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:18:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\
\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.449461 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cfcf58f96119fd03a37cc47985518ce33dd48984718cc2968b6ebb43bd6913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd7df8770de3b5fe13ad0ac02c6a85f864af
968e4ef04084b30f7f566736090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.459751 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jp46k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64aa044d-1eb6-4e5f-9c12-96ba346374fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jp46k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc 
kubenswrapper[4871]: I0128 15:18:38.473543 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.473795 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.473806 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.473819 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.473829 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:38Z","lastTransitionTime":"2026-01-28T15:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.476465 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06b
c35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.489817 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"479d0f29-9dce-41a2-9b1a-75e157c26a03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a2644afc15f39effb1daa734bb53944ad9c7e05e72fbf8e8627879b5cc6c473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31213dea27da9a677ea193b8780cc0e632e1263829731856bf682a3e1853e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd606211ea6e58c66c05dd5ecaf3a0c220b19bc7c82d1ea8d298ae82bd1b675e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.505910 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.522447 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.534663 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69
520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.545045 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9021abaa-911b-4eff-94d7-319546d60422\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1efc7e9400c3dfe89bbbdb0b78c108d7a5d013c433d5b62f53fe338f5d49b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ff221c97a59e61461b6518db7c1e3b30bf02f544c14f247fa74ceea7317d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0ff221c97a59e61461b6518db7c1e3b30bf02f544c14f247fa74ceea7317d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.556948 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.568759 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.576894 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.576933 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.576942 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.576957 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.576968 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:38Z","lastTransitionTime":"2026-01-28T15:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.579226 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.596465 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:14Z\\\",\\\"message\\\":\\\"rnalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 
15:18:14.984464 6544 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984474 6544 services_controller.go:445] Built service openshift-kube-scheduler-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984453 6544 services_controller.go:451] Built service openshift-route-controller-manager/route-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:18:14.983398 6544 default_network_controller.go:776] Recording success ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77
da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.616061 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.627686 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.639641 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.650886 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.665340 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.679559 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.679628 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.679640 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.679659 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.679676 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:38Z","lastTransitionTime":"2026-01-28T15:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.781825 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.781867 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.781876 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.781890 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.781899 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:38Z","lastTransitionTime":"2026-01-28T15:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.880846 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:27:20.103498695 +0000 UTC Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.884749 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.884790 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.884801 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.884815 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.884828 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:38Z","lastTransitionTime":"2026-01-28T15:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.903761 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.903790 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:38 crc kubenswrapper[4871]: E0128 15:18:38.903860 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:38 crc kubenswrapper[4871]: E0128 15:18:38.903955 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.917930 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.931255 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.954545 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:14Z\\\",\\\"message\\\":\\\"rnalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 15:18:14.984464 6544 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984474 6544 services_controller.go:445] Built service 
openshift-kube-scheduler-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984453 6544 services_controller.go:451] Built service openshift-route-controller-manager/route-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:18:14.983398 6544 default_network_controller.go:776] Recording success ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77
da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.970616 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9021abaa-911b-4eff-94d7-319546d60422\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1efc7e9400c3dfe89bbbdb0b78c108d7a5d013c433d5b62f53fe338f5d49b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ff221c97a59e61461b6518db7c1e3b30bf02f544c14f247fa74ceea7317d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0ff221c97a59e61461b6518db7c1e3b30bf02f544c14f247fa74ceea7317d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.990828 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.991042 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.991161 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.991889 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.991911 4871 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:38Z","lastTransitionTime":"2026-01-28T15:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:38 crc kubenswrapper[4871]: I0128 15:18:38.992973 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.013932 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.026295 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:18:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.040416 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.064903 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.082659 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.095149 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.095230 
4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.095248 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.095270 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.095290 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:39Z","lastTransitionTime":"2026-01-28T15:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.099233 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cfcf58f96119fd03a37cc47985518ce33dd48984718cc2968b6ebb43bd6913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd7df8770de3b5fe13ad0ac02c6a85f864af
968e4ef04084b30f7f566736090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.111100 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.122699 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c2298a7ba740a339e0cf8710c12bd89e613c3450bf9bbc1fdbf21d93e3da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:37Z\\\",\\\"message\\\":\\\"2026-01-28T15:17:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1db8ba10-58e2-4c2c-b351-ef1e6613bdd9\\\\n2026-01-28T15:17:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1db8ba10-58e2-4c2c-b351-ef1e6613bdd9 to /host/opt/cni/bin/\\\\n2026-01-28T15:17:52Z [verbose] multus-daemon started\\\\n2026-01-28T15:17:52Z [verbose] 
Readiness Indicator file check\\\\n2026-01-28T15:18:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.135989 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.149725 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.158368 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69
520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.165955 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jp46k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64aa044d-1eb6-4e5f-9c12-96ba346374fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jp46k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:39 crc 
kubenswrapper[4871]: I0128 15:18:39.175965 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0
b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 
15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.185864 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"479d0f29-9dce-41a2-9b1a-75e157c26a03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a2644afc15f39effb1daa734bb53944ad9c7e05e72fbf8e8627879b5cc6c473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31213dea27da9a677ea193b8780cc0e632e1263829731856bf682a3e1853e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd606211ea6e58c66c05dd5ecaf3a0c220b19bc7c82d1ea8d298ae82bd1b675e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.197806 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.197844 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.197853 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.197868 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.197878 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:39Z","lastTransitionTime":"2026-01-28T15:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.299349 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.299383 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.299392 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.299405 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.299413 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:39Z","lastTransitionTime":"2026-01-28T15:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.402146 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.402188 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.402201 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.402217 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.402229 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:39Z","lastTransitionTime":"2026-01-28T15:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.504358 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.504399 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.504410 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.504425 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.504436 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:39Z","lastTransitionTime":"2026-01-28T15:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.606406 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.606447 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.606459 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.606474 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.606485 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:39Z","lastTransitionTime":"2026-01-28T15:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.709067 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.709122 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.709142 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.709165 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.709182 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:39Z","lastTransitionTime":"2026-01-28T15:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.811719 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.811793 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.811806 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.811825 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.811836 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:39Z","lastTransitionTime":"2026-01-28T15:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.881019 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:31:50.632656368 +0000 UTC Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.905540 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.906377 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:39 crc kubenswrapper[4871]: E0128 15:18:39.906559 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:39 crc kubenswrapper[4871]: E0128 15:18:39.906690 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.914647 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.914725 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.914735 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.914754 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:39 crc kubenswrapper[4871]: I0128 15:18:39.914770 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:39Z","lastTransitionTime":"2026-01-28T15:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.017188 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.017292 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.017317 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.017345 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.017369 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:40Z","lastTransitionTime":"2026-01-28T15:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.120208 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.120271 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.120298 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.120326 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.120348 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:40Z","lastTransitionTime":"2026-01-28T15:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.223366 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.223399 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.223411 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.223426 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.223440 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:40Z","lastTransitionTime":"2026-01-28T15:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.327839 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.328312 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.328697 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.329026 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.329248 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:40Z","lastTransitionTime":"2026-01-28T15:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.432194 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.432515 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.432628 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.432743 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.432838 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:40Z","lastTransitionTime":"2026-01-28T15:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.535930 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.536031 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.536043 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.536070 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.536089 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:40Z","lastTransitionTime":"2026-01-28T15:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.638609 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.638660 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.638671 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.638689 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.638700 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:40Z","lastTransitionTime":"2026-01-28T15:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.740971 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.741011 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.741021 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.741035 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.741044 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:40Z","lastTransitionTime":"2026-01-28T15:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.843259 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.843302 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.843312 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.843328 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.843343 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:40Z","lastTransitionTime":"2026-01-28T15:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.882009 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 07:36:51.571954931 +0000 UTC Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.903612 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:40 crc kubenswrapper[4871]: E0128 15:18:40.903719 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.903613 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:40 crc kubenswrapper[4871]: E0128 15:18:40.903837 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.946191 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.946486 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.946604 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.946697 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:40 crc kubenswrapper[4871]: I0128 15:18:40.946795 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:40Z","lastTransitionTime":"2026-01-28T15:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.049258 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.049327 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.049340 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.049360 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.049375 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:41Z","lastTransitionTime":"2026-01-28T15:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.152397 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.152463 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.152480 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.152510 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.152533 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:41Z","lastTransitionTime":"2026-01-28T15:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.255543 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.255683 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.255711 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.255738 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.255756 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:41Z","lastTransitionTime":"2026-01-28T15:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.358899 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.358988 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.359016 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.359046 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.359068 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:41Z","lastTransitionTime":"2026-01-28T15:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.461422 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.461481 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.461499 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.461526 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.461544 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:41Z","lastTransitionTime":"2026-01-28T15:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.564985 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.565393 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.565546 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.565662 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.565749 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:41Z","lastTransitionTime":"2026-01-28T15:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.668304 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.668361 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.668379 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.668402 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.668420 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:41Z","lastTransitionTime":"2026-01-28T15:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.770444 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.770476 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.770487 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.770503 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.770514 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:41Z","lastTransitionTime":"2026-01-28T15:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.875225 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.875264 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.875273 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.875286 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.875296 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:41Z","lastTransitionTime":"2026-01-28T15:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.882934 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 22:42:47.886684117 +0000 UTC Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.903284 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.903371 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:41 crc kubenswrapper[4871]: E0128 15:18:41.903418 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:41 crc kubenswrapper[4871]: E0128 15:18:41.903560 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.978419 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.978471 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.978482 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.978501 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:41 crc kubenswrapper[4871]: I0128 15:18:41.978514 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:41Z","lastTransitionTime":"2026-01-28T15:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.081154 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.081200 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.081213 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.081233 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.081246 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:42Z","lastTransitionTime":"2026-01-28T15:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.184191 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.184249 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.184263 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.184286 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.184302 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:42Z","lastTransitionTime":"2026-01-28T15:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.287235 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.287272 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.287280 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.287298 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.287307 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:42Z","lastTransitionTime":"2026-01-28T15:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.390284 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.390331 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.390340 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.390354 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.390370 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:42Z","lastTransitionTime":"2026-01-28T15:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.492985 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.493041 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.493054 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.493074 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.493090 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:42Z","lastTransitionTime":"2026-01-28T15:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.596225 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.596289 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.596305 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.596330 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.596348 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:42Z","lastTransitionTime":"2026-01-28T15:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.699872 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.699942 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.699965 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.699995 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.700016 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:42Z","lastTransitionTime":"2026-01-28T15:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.803137 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.803177 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.803186 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.803199 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.803209 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:42Z","lastTransitionTime":"2026-01-28T15:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.925886 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 16:52:44.274666707 +0000 UTC Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.927101 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:42 crc kubenswrapper[4871]: E0128 15:18:42.927337 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.927649 4871 scope.go:117] "RemoveContainer" containerID="263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.927843 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:42 crc kubenswrapper[4871]: E0128 15:18:42.928015 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.928283 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.928319 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.928329 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.928344 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:42 crc kubenswrapper[4871]: I0128 15:18:42.928353 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:42Z","lastTransitionTime":"2026-01-28T15:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.030660 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.030922 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.030931 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.030944 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.030953 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:43Z","lastTransitionTime":"2026-01-28T15:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.133498 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.133535 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.133546 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.133563 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.133574 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:43Z","lastTransitionTime":"2026-01-28T15:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.235786 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.235838 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.235849 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.235866 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.235878 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:43Z","lastTransitionTime":"2026-01-28T15:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.338324 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.338366 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.338376 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.338390 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.338400 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:43Z","lastTransitionTime":"2026-01-28T15:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.432331 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovnkube-controller/2.log" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.436631 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerStarted","Data":"7d0c2a2b38d9ad0ac6e4219b2ad68723992f3980697ca73508a0956647e966da"} Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.437132 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.440471 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.440518 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.440532 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.440554 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.440568 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:43Z","lastTransitionTime":"2026-01-28T15:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.471840 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.485072 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.500028 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.512217 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.523720 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.533842 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d
6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.543047 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.543085 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.543095 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:43 crc 
kubenswrapper[4871]: I0128 15:18:43.543113 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.543122 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:43Z","lastTransitionTime":"2026-01-28T15:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.549075 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c2298a7ba740a339e0cf8710c12bd89e613c3450bf9bbc1fdbf21d93e3da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:37Z\\\",\\\"message\\\":\\\"2026-01-28T15:17:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1db8ba10-58e2-4c2c-b351-ef1e6613bdd9\\\\n2026-01-28T15:17:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1db8ba10-58e2-4c2c-b351-ef1e6613bdd9 to /host/opt/cni/bin/\\\\n2026-01-28T15:17:52Z [verbose] multus-daemon started\\\\n2026-01-28T15:17:52Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:18:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.564209 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cfcf58f96119fd03a37cc47985518ce33dd48984718cc2968b6ebb43bd6913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd7df8770de3b5fe13ad0ac02c6a85f864af968e4ef04084b30f7f566736090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.581515 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o
://b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 
+0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.595715 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"479d0f29-9dce-41a2-9b1a-75e157c26a03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a2644afc15f39effb1daa734bb53944ad9c7e05e72fbf8e8627879b5cc6c473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31213dea27da9a677ea193b8780cc0e632e1263829731856bf682a3e1853e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd606211ea6e58c66c05dd5ecaf3a0c220b19bc7c82d1ea8d298ae82bd1b675e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.609323 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.629563 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.646334 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.646376 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.646389 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.646406 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.646417 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:43Z","lastTransitionTime":"2026-01-28T15:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.648089 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.672441 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jp46k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64aa044d-1eb6-4e5f-9c12-96ba346374fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jp46k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.684794 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9021abaa-911b-4eff-94d7-319546d60422\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1efc7e9400c3dfe89bbbdb0b78c108d7a5d013c433d5b62f53fe338f5d49b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ff221c97a59e61461b6518db7c1e3b30bf02f544c14f247fa74ceea7317d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0ff221c97a59e61461b6518db7c1e3b30bf02f544c14f247fa74ceea7317d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.701197 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.715056 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.724419 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.746644 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0c2a2b38d9ad0ac6e4219b2ad68723992f3980697ca73508a0956647e966da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:14Z\\\",\\\"message\\\":\\\"rnalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 15:18:14.984464 6544 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984474 6544 services_controller.go:445] Built service 
openshift-kube-scheduler-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984453 6544 services_controller.go:451] Built service openshift-route-controller-manager/route-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:18:14.983398 6544 default_network_controller.go:776] Recording success 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.748353 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.748403 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.748413 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.748426 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.748436 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:43Z","lastTransitionTime":"2026-01-28T15:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.851023 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.851073 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.851090 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.851109 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.851123 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:43Z","lastTransitionTime":"2026-01-28T15:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.903312 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:43 crc kubenswrapper[4871]: E0128 15:18:43.903448 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.903667 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:43 crc kubenswrapper[4871]: E0128 15:18:43.903730 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.926027 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 05:02:37.703403207 +0000 UTC Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.954173 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.954310 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.954330 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.954357 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:43 crc kubenswrapper[4871]: I0128 15:18:43.954376 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:43Z","lastTransitionTime":"2026-01-28T15:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.057649 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.057718 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.057736 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.057763 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.057782 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:44Z","lastTransitionTime":"2026-01-28T15:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.160767 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.160849 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.160875 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.160901 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.160920 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:44Z","lastTransitionTime":"2026-01-28T15:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.263537 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.263653 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.263678 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.263705 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.263727 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:44Z","lastTransitionTime":"2026-01-28T15:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.367445 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.367529 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.367564 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.367628 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.367658 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:44Z","lastTransitionTime":"2026-01-28T15:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.443066 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovnkube-controller/3.log" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.444008 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovnkube-controller/2.log" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.447172 4871 generic.go:334] "Generic (PLEG): container finished" podID="178343c8-b657-4440-953e-6daef3609145" containerID="7d0c2a2b38d9ad0ac6e4219b2ad68723992f3980697ca73508a0956647e966da" exitCode=1 Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.447227 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerDied","Data":"7d0c2a2b38d9ad0ac6e4219b2ad68723992f3980697ca73508a0956647e966da"} Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.447278 4871 scope.go:117] "RemoveContainer" containerID="263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.448107 4871 scope.go:117] "RemoveContainer" containerID="7d0c2a2b38d9ad0ac6e4219b2ad68723992f3980697ca73508a0956647e966da" Jan 28 15:18:44 crc kubenswrapper[4871]: E0128 15:18:44.448332 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podUID="178343c8-b657-4440-953e-6daef3609145" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.462051 4871 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.470617 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.470665 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.470679 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.470699 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.470713 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:44Z","lastTransitionTime":"2026-01-28T15:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.475912 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jp46k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64aa044d-1eb6-4e5f-9c12-96ba346374fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jp46k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:44 crc 
kubenswrapper[4871]: I0128 15:18:44.491876 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0
b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 
15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.505318 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"479d0f29-9dce-41a2-9b1a-75e157c26a03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a2644afc15f39effb1daa734bb53944ad9c7e05e72fbf8e8627879b5cc6c473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31213dea27da9a677ea193b8780cc0e632e1263829731856bf682a3e1853e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd606211ea6e58c66c05dd5ecaf3a0c220b19bc7c82d1ea8d298ae82bd1b675e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.521523 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.535718 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.558751 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0c2a2b38d9ad0ac6e4219b2ad68723992f3980697ca73508a0956647e966da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263c3f57c56b472af33f47f7ca9ce05bff926d6e9bab15f051f29c4dc5d68dff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:14Z\\\",\\\"message\\\":\\\"rnalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 15:18:14.984464 6544 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984474 6544 services_controller.go:445] Built service 
openshift-kube-scheduler-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0128 15:18:14.984453 6544 services_controller.go:451] Built service openshift-route-controller-manager/route-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:18:14.983398 6544 default_network_controller.go:776] Recording success ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d0c2a2b38d9ad0ac6e4219b2ad68723992f3980697ca73508a0956647e966da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:43Z\\\",\\\"message\\\":\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, 
Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0128 15:18:43.861070 6948 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z]\\\\nI0128 15:18:43.861096 6948 
model_cli\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1
241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.569638 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9021abaa-911b-4eff-94d7-319546d60422\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1efc7e9400c3dfe89bbbdb0b78c108d7a5d013c433d5b62f53fe338f5d49b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ff221c97a59e61461b6518db7c1e3b30bf02f544c14f247fa74ceea7317d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0ff221c97a59e61461b6518db7c1e3b30bf02f544c14f247fa74ceea7317d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.573691 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.573754 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.573768 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.573790 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.573802 4871 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:44Z","lastTransitionTime":"2026-01-28T15:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.583957 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.597165 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.606008 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.618338 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.636181 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.651512 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.664991 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.676112 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.676155 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.676174 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.676196 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.676213 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:44Z","lastTransitionTime":"2026-01-28T15:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.677404 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.688361 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.701211 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c2298a7ba740a339e0cf8710c12bd89e613c3450bf9bbc1fdbf21d93e3da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77
25c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:37Z\\\",\\\"message\\\":\\\"2026-01-28T15:17:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1db8ba10-58e2-4c2c-b351-ef1e6613bdd9\\\\n2026-01-28T15:17:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1db8ba10-58e2-4c2c-b351-ef1e6613bdd9 to /host/opt/cni/bin/\\\\n2026-01-28T15:17:52Z [verbose] multus-daemon started\\\\n2026-01-28T15:17:52Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:18:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib
/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.712689 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cfcf58f96119fd03a37cc47985518ce33dd48984718cc2968b6ebb43bd6913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd7df8770de3b5fe13ad0ac02c6a85f864af
968e4ef04084b30f7f566736090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.779741 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.779803 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.779821 4871 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.779843 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.779859 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:44Z","lastTransitionTime":"2026-01-28T15:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.883204 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.883256 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.883266 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.883282 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.883292 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:44Z","lastTransitionTime":"2026-01-28T15:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.903520 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.903520 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:44 crc kubenswrapper[4871]: E0128 15:18:44.903678 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:44 crc kubenswrapper[4871]: E0128 15:18:44.903919 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.926320 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 14:13:19.79150437 +0000 UTC Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.986578 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.986694 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.986794 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.986832 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:44 crc kubenswrapper[4871]: I0128 15:18:44.986854 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:44Z","lastTransitionTime":"2026-01-28T15:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.089268 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.089299 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.089307 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.089320 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.089328 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:45Z","lastTransitionTime":"2026-01-28T15:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.192675 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.192783 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.192809 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.192837 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.192858 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:45Z","lastTransitionTime":"2026-01-28T15:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.295902 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.295970 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.295987 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.296010 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.296027 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:45Z","lastTransitionTime":"2026-01-28T15:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.399736 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.399814 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.399836 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.399864 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.399884 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:45Z","lastTransitionTime":"2026-01-28T15:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.469798 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovnkube-controller/3.log" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.474376 4871 scope.go:117] "RemoveContainer" containerID="7d0c2a2b38d9ad0ac6e4219b2ad68723992f3980697ca73508a0956647e966da" Jan 28 15:18:45 crc kubenswrapper[4871]: E0128 15:18:45.474639 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podUID="178343c8-b657-4440-953e-6daef3609145" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.507873 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.509667 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.509706 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.509720 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.509741 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.509756 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:45Z","lastTransitionTime":"2026-01-28T15:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.540119 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.568250 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.583376 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.598207 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.612405 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.612450 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.612461 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.612477 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.612490 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:45Z","lastTransitionTime":"2026-01-28T15:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.612681 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.627331 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c2298a7ba740a339e0cf8710c12bd89e613c3450bf9bbc1fdbf21d93e3da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:37Z\\\",\\\"message\\\":\\\"2026-01-28T15:17:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_1db8ba10-58e2-4c2c-b351-ef1e6613bdd9\\\\n2026-01-28T15:17:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1db8ba10-58e2-4c2c-b351-ef1e6613bdd9 to /host/opt/cni/bin/\\\\n2026-01-28T15:17:52Z [verbose] multus-daemon started\\\\n2026-01-28T15:17:52Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:18:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.641272 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cfcf58f96119fd03a37cc47985518ce33dd48984718cc2968b6ebb43bd6913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd7df8770de3b5fe13ad0ac02c6a85f864af
968e4ef04084b30f7f566736090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.657727 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5
a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.669208 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.682263 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jp46k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64aa044d-1eb6-4e5f-9c12-96ba346374fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jp46k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.697764 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 
UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"
ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.713871 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"479d0f29-9dce-41a2-9b1a-75e157c26a03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a2644afc15f39effb1daa734bb53944ad9c7e05e72fbf8e8627879b5cc6c473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31213dea27da9a677ea193b8780cc0e632e1263829731856bf682a3e1853e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd606211ea6e58c66c05dd5ecaf3a0c220b19bc7c82d1ea8d298ae82bd1b675e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.715690 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.715746 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.715829 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.716079 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.716098 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:45Z","lastTransitionTime":"2026-01-28T15:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.732862 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.746612 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.766428 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0c2a2b38d9ad0ac6e4219b2ad68723992f3980697ca73508a0956647e966da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d0c2a2b38d9ad0ac6e4219b2ad68723992f3980697ca73508a0956647e966da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:43Z\\\",\\\"message\\\":\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, 
Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0128 15:18:43.861070 6948 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z]\\\\nI0128 15:18:43.861096 6948 model_cli\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77
da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.777468 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9021abaa-911b-4eff-94d7-319546d60422\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1efc7e9400c3dfe89bbbdb0b78c108d7a5d013c433d5b62f53fe338f5d49b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ff221c97a59e61461b6518db7c1e3b30bf02f544c14f247fa74ceea7317d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0ff221c97a59e61461b6518db7c1e3b30bf02f544c14f247fa74ceea7317d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.792079 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.807954 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.818755 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.818831 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.818843 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.818864 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.818877 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:45Z","lastTransitionTime":"2026-01-28T15:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.833496 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.833573 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.833602 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.833624 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.833641 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:45Z","lastTransitionTime":"2026-01-28T15:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:45 crc kubenswrapper[4871]: E0128 15:18:45.847519 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.851660 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.851704 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.851718 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.851738 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.851750 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:45Z","lastTransitionTime":"2026-01-28T15:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:45 crc kubenswrapper[4871]: E0128 15:18:45.865902 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.869711 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.869748 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.869759 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.869777 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.869789 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:45Z","lastTransitionTime":"2026-01-28T15:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:45 crc kubenswrapper[4871]: E0128 15:18:45.884839 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.887800 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.887852 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.887866 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.887885 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.887916 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:45Z","lastTransitionTime":"2026-01-28T15:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:45 crc kubenswrapper[4871]: E0128 15:18:45.900624 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.903245 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.903328 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:45 crc kubenswrapper[4871]: E0128 15:18:45.903435 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:45 crc kubenswrapper[4871]: E0128 15:18:45.903759 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.908099 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.908135 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.908145 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.908160 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.908172 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:45Z","lastTransitionTime":"2026-01-28T15:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:45 crc kubenswrapper[4871]: E0128 15:18:45.922215 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:45 crc kubenswrapper[4871]: E0128 15:18:45.922376 4871 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.923959 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.923986 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.923995 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.924009 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.924019 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:45Z","lastTransitionTime":"2026-01-28T15:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:45 crc kubenswrapper[4871]: I0128 15:18:45.926631 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 19:40:58.990962287 +0000 UTC Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.026349 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.026387 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.026395 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.026409 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.026419 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:46Z","lastTransitionTime":"2026-01-28T15:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.129628 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.129685 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.129702 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.129725 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.129743 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:46Z","lastTransitionTime":"2026-01-28T15:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.232957 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.233005 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.233016 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.233039 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.233050 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:46Z","lastTransitionTime":"2026-01-28T15:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.336026 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.336076 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.336085 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.336100 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.336109 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:46Z","lastTransitionTime":"2026-01-28T15:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.439127 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.439195 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.439220 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.439251 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.439278 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:46Z","lastTransitionTime":"2026-01-28T15:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.542519 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.542629 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.542649 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.542678 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.542709 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:46Z","lastTransitionTime":"2026-01-28T15:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.645290 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.645324 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.645334 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.645347 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.645355 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:46Z","lastTransitionTime":"2026-01-28T15:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.747928 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.747968 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.747977 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.747991 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.748002 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:46Z","lastTransitionTime":"2026-01-28T15:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.850731 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.850771 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.850782 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.850796 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.850805 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:46Z","lastTransitionTime":"2026-01-28T15:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.903928 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.903956 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:46 crc kubenswrapper[4871]: E0128 15:18:46.904106 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:46 crc kubenswrapper[4871]: E0128 15:18:46.904385 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.927524 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 23:45:49.712753334 +0000 UTC Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.953297 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.953346 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.953357 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.953377 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:46 crc kubenswrapper[4871]: I0128 15:18:46.953389 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:46Z","lastTransitionTime":"2026-01-28T15:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.056121 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.056168 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.056180 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.056195 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.056207 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:47Z","lastTransitionTime":"2026-01-28T15:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.159073 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.159120 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.159132 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.159151 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.159163 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:47Z","lastTransitionTime":"2026-01-28T15:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.261928 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.262006 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.262029 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.262063 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.262086 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:47Z","lastTransitionTime":"2026-01-28T15:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.364803 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.364858 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.364870 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.364886 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.364900 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:47Z","lastTransitionTime":"2026-01-28T15:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.467459 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.467514 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.467526 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.467544 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.467555 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:47Z","lastTransitionTime":"2026-01-28T15:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.570201 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.570251 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.570264 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.570284 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.570293 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:47Z","lastTransitionTime":"2026-01-28T15:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.673207 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.673252 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.673265 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.673287 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.673301 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:47Z","lastTransitionTime":"2026-01-28T15:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.776203 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.776260 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.776276 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.776297 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.776315 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:47Z","lastTransitionTime":"2026-01-28T15:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.878611 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.878639 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.878647 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.878660 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.878668 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:47Z","lastTransitionTime":"2026-01-28T15:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.903814 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:47 crc kubenswrapper[4871]: E0128 15:18:47.903910 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.904046 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:47 crc kubenswrapper[4871]: E0128 15:18:47.904093 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.928378 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 21:10:57.580117167 +0000 UTC Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.980958 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.981020 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.981034 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.981059 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:47 crc kubenswrapper[4871]: I0128 15:18:47.981071 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:47Z","lastTransitionTime":"2026-01-28T15:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.084190 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.084235 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.084248 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.084264 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.084278 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:48Z","lastTransitionTime":"2026-01-28T15:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.187155 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.187208 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.187219 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.187237 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.187249 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:48Z","lastTransitionTime":"2026-01-28T15:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.291156 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.291207 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.291225 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.291242 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.291253 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:48Z","lastTransitionTime":"2026-01-28T15:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.396444 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.396481 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.396490 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.396505 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.396513 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:48Z","lastTransitionTime":"2026-01-28T15:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.498827 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.498870 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.498879 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.498892 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.498902 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:48Z","lastTransitionTime":"2026-01-28T15:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.600953 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.600990 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.601002 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.601016 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.601026 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:48Z","lastTransitionTime":"2026-01-28T15:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.703355 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.703418 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.703429 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.703443 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.703453 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:48Z","lastTransitionTime":"2026-01-28T15:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.806181 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.806212 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.806221 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.806234 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.806245 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:48Z","lastTransitionTime":"2026-01-28T15:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.903810 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.903841 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:48 crc kubenswrapper[4871]: E0128 15:18:48.904106 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:48 crc kubenswrapper[4871]: E0128 15:18:48.904303 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.919958 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.920005 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.920015 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.920030 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.920040 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:48Z","lastTransitionTime":"2026-01-28T15:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.929190 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:52:45.335586105 +0000 UTC Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.929312 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.949916 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.967053 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:48 crc kubenswrapper[4871]: I0128 15:18:48.991047 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0c2a2b38d9ad0ac6e4219b2ad68723992f3980697ca73508a0956647e966da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d0c2a2b38d9ad0ac6e4219b2ad68723992f3980697ca73508a0956647e966da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:43Z\\\",\\\"message\\\":\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, 
Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0128 15:18:43.861070 6948 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z]\\\\nI0128 15:18:43.861096 6948 model_cli\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77
da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.006248 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9021abaa-911b-4eff-94d7-319546d60422\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1efc7e9400c3dfe89bbbdb0b78c108d7a5d013c433d5b62f53fe338f5d49b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ff221c97a59e61461b6518db7c1e3b30bf02f544c14f247fa74ceea7317d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0ff221c97a59e61461b6518db7c1e3b30bf02f544c14f247fa74ceea7317d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.022129 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.023714 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.023771 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.023786 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.023804 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.023817 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:49Z","lastTransitionTime":"2026-01-28T15:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.040142 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13
fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.053394 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:18:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.069389 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.089362 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.105862 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c2298a7ba740a339e0cf8710c12bd89e613c3450bf9bbc1fdbf21d93e3da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:37Z\\\",\\\"message\\\":\\\"2026-01-28T15:17:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1db8ba10-58e2-4c2c-b351-ef1e6613bdd9\\\\n2026-01-28T15:17:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1db8ba10-58e2-4c2c-b351-ef1e6613bdd9 to /host/opt/cni/bin/\\\\n2026-01-28T15:17:52Z [verbose] multus-daemon started\\\\n2026-01-28T15:17:52Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:18:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.117640 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cfcf58f96119fd03a37cc47985518ce33dd48984718cc2968b6ebb43bd6913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd7df8770de3b5fe13ad0ac02c6a85f864af968e4ef04084b30f7f566736090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.130298 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.130370 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.130384 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.130406 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.130420 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:49Z","lastTransitionTime":"2026-01-28T15:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.134518 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.153202 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"479d0f29-9dce-41a2-9b1a-75e157c26a03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a2644afc15f39effb1daa734bb53944ad9c7e05e72fbf8e8627879b5cc6c473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31213dea27da9a677ea193b8780cc0e632e1263829731856bf682a3e1853e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd606211ea6e58c66c05dd5ecaf3a0c220b19bc7c82d1ea8d298ae82bd1b675e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.169512 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.186783 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.199943 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69
520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.211202 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jp46k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64aa044d-1eb6-4e5f-9c12-96ba346374fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jp46k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:49 crc 
kubenswrapper[4871]: I0128 15:18:49.232695 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.232747 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.232759 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.232786 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.232800 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:49Z","lastTransitionTime":"2026-01-28T15:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.239287 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06b
c35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.336594 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.336646 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.336657 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.336677 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 
15:18:49.336690 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:49Z","lastTransitionTime":"2026-01-28T15:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.439025 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.439071 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.439081 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.439106 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.439118 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:49Z","lastTransitionTime":"2026-01-28T15:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.541757 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.541857 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.541881 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.541912 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.541935 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:49Z","lastTransitionTime":"2026-01-28T15:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.644663 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.644723 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.644732 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.644749 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.644759 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:49Z","lastTransitionTime":"2026-01-28T15:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.748621 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.748698 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.748725 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.748759 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.748792 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:49Z","lastTransitionTime":"2026-01-28T15:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.851860 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.851947 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.851974 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.852002 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.852020 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:49Z","lastTransitionTime":"2026-01-28T15:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.903773 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.903819 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:49 crc kubenswrapper[4871]: E0128 15:18:49.903991 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:49 crc kubenswrapper[4871]: E0128 15:18:49.904084 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.930335 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:29:42.881879242 +0000 UTC Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.955312 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.955371 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.955392 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.955416 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:49 crc kubenswrapper[4871]: I0128 15:18:49.955436 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:49Z","lastTransitionTime":"2026-01-28T15:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.059046 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.059176 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.059198 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.059221 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.059239 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:50Z","lastTransitionTime":"2026-01-28T15:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.162503 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.162571 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.162640 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.162686 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.162729 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:50Z","lastTransitionTime":"2026-01-28T15:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.265668 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.265713 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.265727 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.265743 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.265755 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:50Z","lastTransitionTime":"2026-01-28T15:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.369325 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.369382 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.369396 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.369417 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.369430 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:50Z","lastTransitionTime":"2026-01-28T15:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.473054 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.473108 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.473118 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.473142 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.473157 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:50Z","lastTransitionTime":"2026-01-28T15:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.576361 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.576416 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.576431 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.576454 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.576467 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:50Z","lastTransitionTime":"2026-01-28T15:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.678978 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.679028 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.679039 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.679057 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.679068 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:50Z","lastTransitionTime":"2026-01-28T15:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.781756 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.781794 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.781802 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.781820 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.781832 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:50Z","lastTransitionTime":"2026-01-28T15:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.885199 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.885299 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.885334 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.885393 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.885418 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:50Z","lastTransitionTime":"2026-01-28T15:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.903054 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.903161 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:50 crc kubenswrapper[4871]: E0128 15:18:50.903249 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:50 crc kubenswrapper[4871]: E0128 15:18:50.903362 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.930620 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 19:50:00.491408425 +0000 UTC Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.989541 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.989666 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.989690 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.989715 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:50 crc kubenswrapper[4871]: I0128 15:18:50.989733 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:50Z","lastTransitionTime":"2026-01-28T15:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.093528 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.093668 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.093693 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.093723 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.093740 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:51Z","lastTransitionTime":"2026-01-28T15:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.197020 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.197088 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.197106 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.197129 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.197146 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:51Z","lastTransitionTime":"2026-01-28T15:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.299845 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.299889 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.299901 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.299918 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.299929 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:51Z","lastTransitionTime":"2026-01-28T15:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.402645 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.402718 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.402737 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.402762 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.402781 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:51Z","lastTransitionTime":"2026-01-28T15:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.505571 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.505672 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.505701 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.505729 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.505750 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:51Z","lastTransitionTime":"2026-01-28T15:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.609233 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.609316 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.609336 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.609364 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.609387 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:51Z","lastTransitionTime":"2026-01-28T15:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.713124 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.713224 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.713265 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.713299 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.713325 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:51Z","lastTransitionTime":"2026-01-28T15:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.816908 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.816990 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.817015 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.817045 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.817067 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:51Z","lastTransitionTime":"2026-01-28T15:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.903713 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:51 crc kubenswrapper[4871]: E0128 15:18:51.903859 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.904047 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:51 crc kubenswrapper[4871]: E0128 15:18:51.904108 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.920484 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.920542 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.920554 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.920578 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.920611 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:51Z","lastTransitionTime":"2026-01-28T15:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:51 crc kubenswrapper[4871]: I0128 15:18:51.931642 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 03:31:34.358717795 +0000 UTC Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.024059 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.024124 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.024140 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.024164 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.024182 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:52Z","lastTransitionTime":"2026-01-28T15:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.127727 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.127806 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.127829 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.127949 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.127970 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:52Z","lastTransitionTime":"2026-01-28T15:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.230261 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.230304 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.230315 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.230330 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.230339 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:52Z","lastTransitionTime":"2026-01-28T15:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.238539 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.238614 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.238652 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:52 crc kubenswrapper[4871]: E0128 15:18:52.238695 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:56.238676981 +0000 UTC m=+148.134515303 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:18:52 crc kubenswrapper[4871]: E0128 15:18:52.238743 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:18:52 crc kubenswrapper[4871]: E0128 15:18:52.238755 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:18:52 crc kubenswrapper[4871]: E0128 15:18:52.238758 4871 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.238781 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:52 crc kubenswrapper[4871]: E0128 15:18:52.238804 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:56.238792745 +0000 UTC m=+148.134631067 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:18:52 crc kubenswrapper[4871]: E0128 15:18:52.238766 4871 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:18:52 crc kubenswrapper[4871]: E0128 15:18:52.238869 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:56.238856887 +0000 UTC m=+148.134695209 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.238825 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:52 crc kubenswrapper[4871]: E0128 15:18:52.238885 4871 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:18:52 crc kubenswrapper[4871]: E0128 15:18:52.238916 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:56.238907488 +0000 UTC m=+148.134745810 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:18:52 crc kubenswrapper[4871]: E0128 15:18:52.239158 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:18:52 crc kubenswrapper[4871]: E0128 15:18:52.239176 4871 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:18:52 crc kubenswrapper[4871]: E0128 15:18:52.239184 4871 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:18:52 crc kubenswrapper[4871]: E0128 15:18:52.239221 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:56.239213068 +0000 UTC m=+148.135051390 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.332764 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.332809 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.332818 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.332832 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.332846 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:52Z","lastTransitionTime":"2026-01-28T15:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.436833 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.436934 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.436955 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.436980 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.437048 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:52Z","lastTransitionTime":"2026-01-28T15:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.539495 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.539527 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.539538 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.539550 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.539559 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:52Z","lastTransitionTime":"2026-01-28T15:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.642164 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.642246 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.642272 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.642302 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.642331 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:52Z","lastTransitionTime":"2026-01-28T15:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.745943 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.746010 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.746032 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.746062 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.746081 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:52Z","lastTransitionTime":"2026-01-28T15:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.849691 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.849751 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.849773 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.849802 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.849833 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:52Z","lastTransitionTime":"2026-01-28T15:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.903887 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.903996 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:52 crc kubenswrapper[4871]: E0128 15:18:52.904156 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:52 crc kubenswrapper[4871]: E0128 15:18:52.904319 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.932776 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 01:07:47.937241191 +0000 UTC Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.953058 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.953124 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.953150 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.953179 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:52 crc kubenswrapper[4871]: I0128 15:18:52.953203 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:52Z","lastTransitionTime":"2026-01-28T15:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.056364 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.056422 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.056441 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.056466 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.056488 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:53Z","lastTransitionTime":"2026-01-28T15:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.159962 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.160015 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.160028 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.160048 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.160063 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:53Z","lastTransitionTime":"2026-01-28T15:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.264665 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.265210 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.265225 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.265252 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.265270 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:53Z","lastTransitionTime":"2026-01-28T15:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.368837 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.368891 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.368901 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.368927 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.368944 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:53Z","lastTransitionTime":"2026-01-28T15:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.472093 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.472145 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.472154 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.472176 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.472190 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:53Z","lastTransitionTime":"2026-01-28T15:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.575569 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.575653 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.575667 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.575690 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.575701 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:53Z","lastTransitionTime":"2026-01-28T15:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.679333 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.679390 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.679408 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.679431 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.679445 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:53Z","lastTransitionTime":"2026-01-28T15:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.782245 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.782301 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.782316 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.782336 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.782352 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:53Z","lastTransitionTime":"2026-01-28T15:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.886044 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.886119 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.886138 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.886166 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.886186 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:53Z","lastTransitionTime":"2026-01-28T15:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.903438 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.903514 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:53 crc kubenswrapper[4871]: E0128 15:18:53.903658 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:53 crc kubenswrapper[4871]: E0128 15:18:53.903914 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.933702 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:00:19.502461455 +0000 UTC Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.990288 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.990348 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.990366 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.990392 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:53 crc kubenswrapper[4871]: I0128 15:18:53.990410 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:53Z","lastTransitionTime":"2026-01-28T15:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.093970 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.094044 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.094068 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.094096 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.094118 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:54Z","lastTransitionTime":"2026-01-28T15:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.195849 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.195892 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.195902 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.195919 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.195928 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:54Z","lastTransitionTime":"2026-01-28T15:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.298549 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.298622 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.298650 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.298662 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.298671 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:54Z","lastTransitionTime":"2026-01-28T15:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.401110 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.401140 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.401148 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.401162 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.401172 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:54Z","lastTransitionTime":"2026-01-28T15:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.505226 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.505278 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.505288 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.505304 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.505316 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:54Z","lastTransitionTime":"2026-01-28T15:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.607900 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.608060 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.608081 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.608100 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.608148 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:54Z","lastTransitionTime":"2026-01-28T15:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.710857 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.710890 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.710898 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.710909 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.710918 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:54Z","lastTransitionTime":"2026-01-28T15:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.813884 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.813921 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.813932 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.813945 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.813954 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:54Z","lastTransitionTime":"2026-01-28T15:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.903089 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:54 crc kubenswrapper[4871]: E0128 15:18:54.903533 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.903239 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:54 crc kubenswrapper[4871]: E0128 15:18:54.904098 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.916209 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.916564 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.916805 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.916951 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.917091 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:54Z","lastTransitionTime":"2026-01-28T15:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:54 crc kubenswrapper[4871]: I0128 15:18:54.934102 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:58:14.317667481 +0000 UTC Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.020277 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.020663 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.020807 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.020937 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.021054 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:55Z","lastTransitionTime":"2026-01-28T15:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.124464 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.124862 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.125061 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.125253 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.125504 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:55Z","lastTransitionTime":"2026-01-28T15:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.230164 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.230245 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.230267 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.230293 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.230312 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:55Z","lastTransitionTime":"2026-01-28T15:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.333408 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.333462 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.333478 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.333502 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.333537 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:55Z","lastTransitionTime":"2026-01-28T15:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.436831 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.436916 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.436941 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.436972 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.436996 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:55Z","lastTransitionTime":"2026-01-28T15:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.539019 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.539057 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.539067 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.539084 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.539096 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:55Z","lastTransitionTime":"2026-01-28T15:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.642306 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.642380 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.642402 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.642435 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.642461 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:55Z","lastTransitionTime":"2026-01-28T15:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.745642 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.745712 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.745735 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.745766 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.745789 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:55Z","lastTransitionTime":"2026-01-28T15:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.849212 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.849347 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.849373 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.849405 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.849427 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:55Z","lastTransitionTime":"2026-01-28T15:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.903658 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.903970 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:55 crc kubenswrapper[4871]: E0128 15:18:55.904155 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:55 crc kubenswrapper[4871]: E0128 15:18:55.904356 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.934898 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:46:50.121096107 +0000 UTC Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.952114 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.952170 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.952187 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.952212 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.952229 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:55Z","lastTransitionTime":"2026-01-28T15:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.954545 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.954677 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.954700 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.954725 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.954743 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:55Z","lastTransitionTime":"2026-01-28T15:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:55 crc kubenswrapper[4871]: E0128 15:18:55.978091 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.983968 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.984346 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.984530 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.984713 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:55 crc kubenswrapper[4871]: I0128 15:18:55.984930 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:55Z","lastTransitionTime":"2026-01-28T15:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:56 crc kubenswrapper[4871]: E0128 15:18:56.004918 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.009579 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.009663 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.009676 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.009715 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.009727 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:56Z","lastTransitionTime":"2026-01-28T15:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:56 crc kubenswrapper[4871]: E0128 15:18:56.026098 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.030300 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.030334 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.030346 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.030361 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.030373 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:56Z","lastTransitionTime":"2026-01-28T15:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:56 crc kubenswrapper[4871]: E0128 15:18:56.045707 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.049961 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.050015 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.050026 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.050047 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.050060 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:56Z","lastTransitionTime":"2026-01-28T15:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:56 crc kubenswrapper[4871]: E0128 15:18:56.063651 4871 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fb24c48-4a41-4f44-93d2-0105f9c98753\\\",\\\"systemUUID\\\":\\\"fc09a3b6-b11a-4dc2-972c-09bf48a77414\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:56 crc kubenswrapper[4871]: E0128 15:18:56.064006 4871 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.065901 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.065949 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.065964 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.065984 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.065999 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:56Z","lastTransitionTime":"2026-01-28T15:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.170235 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.170313 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.170334 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.170358 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.170379 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:56Z","lastTransitionTime":"2026-01-28T15:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.273721 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.273789 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.273803 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.273826 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.273841 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:56Z","lastTransitionTime":"2026-01-28T15:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.377409 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.377492 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.377515 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.377543 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.377567 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:56Z","lastTransitionTime":"2026-01-28T15:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.481066 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.481161 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.481178 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.481204 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.481221 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:56Z","lastTransitionTime":"2026-01-28T15:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.585004 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.585103 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.585132 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.585174 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.585197 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:56Z","lastTransitionTime":"2026-01-28T15:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.688795 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.688888 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.688912 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.688941 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.688964 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:56Z","lastTransitionTime":"2026-01-28T15:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.792419 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.792509 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.792535 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.792576 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.792650 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:56Z","lastTransitionTime":"2026-01-28T15:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.896020 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.896095 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.896117 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.896141 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.896157 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:56Z","lastTransitionTime":"2026-01-28T15:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.903708 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:56 crc kubenswrapper[4871]: E0128 15:18:56.903818 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.903976 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:56 crc kubenswrapper[4871]: E0128 15:18:56.904242 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:56 crc kubenswrapper[4871]: I0128 15:18:56.935930 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:29:17.926425326 +0000 UTC Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.000575 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.000668 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.000682 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.000701 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.000719 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:57Z","lastTransitionTime":"2026-01-28T15:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.103542 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.103618 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.103629 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.103646 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.103654 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:57Z","lastTransitionTime":"2026-01-28T15:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.206241 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.206312 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.206330 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.206353 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.206373 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:57Z","lastTransitionTime":"2026-01-28T15:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.308566 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.308630 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.308639 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.308653 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.308665 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:57Z","lastTransitionTime":"2026-01-28T15:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.411424 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.411497 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.411515 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.411537 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.411553 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:57Z","lastTransitionTime":"2026-01-28T15:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.514778 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.515159 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.515373 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.515698 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.515931 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:57Z","lastTransitionTime":"2026-01-28T15:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.618506 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.618565 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.618584 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.618642 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.618659 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:57Z","lastTransitionTime":"2026-01-28T15:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.721577 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.721672 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.721690 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.721713 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.721730 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:57Z","lastTransitionTime":"2026-01-28T15:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.824371 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.824866 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.825094 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.825321 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.825522 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:57Z","lastTransitionTime":"2026-01-28T15:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.903150 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:57 crc kubenswrapper[4871]: E0128 15:18:57.903426 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.903158 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:57 crc kubenswrapper[4871]: E0128 15:18:57.903677 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.928996 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.929059 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.929087 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.929114 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.929131 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:57Z","lastTransitionTime":"2026-01-28T15:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:57 crc kubenswrapper[4871]: I0128 15:18:57.936502 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 01:47:32.347158806 +0000 UTC Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.032205 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.032521 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.032638 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.032755 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.032848 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:58Z","lastTransitionTime":"2026-01-28T15:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.135835 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.135899 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.135918 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.135944 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.135962 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:58Z","lastTransitionTime":"2026-01-28T15:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.238740 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.238807 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.238826 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.238853 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.238871 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:58Z","lastTransitionTime":"2026-01-28T15:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.341775 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.341834 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.341854 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.341881 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.341898 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:58Z","lastTransitionTime":"2026-01-28T15:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.444520 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.444650 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.444676 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.444708 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.444730 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:58Z","lastTransitionTime":"2026-01-28T15:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.547724 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.547800 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.547823 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.547862 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.547888 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:58Z","lastTransitionTime":"2026-01-28T15:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.651530 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.651668 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.651696 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.651727 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.651754 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:58Z","lastTransitionTime":"2026-01-28T15:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.756369 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.756416 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.756426 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.756444 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.756455 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:58Z","lastTransitionTime":"2026-01-28T15:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.859274 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.859347 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.859364 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.859381 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.859391 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:58Z","lastTransitionTime":"2026-01-28T15:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.903290 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.903329 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:18:58 crc kubenswrapper[4871]: E0128 15:18:58.903421 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:18:58 crc kubenswrapper[4871]: E0128 15:18:58.903511 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.914579 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e781ae5d9bb6ee358dd05fb11688c7e382f13f96854443e1d77e482d57edcee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc39c6865d02dd3f85f379df66133847e143e74d6b083015d0e30402088a16ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p9bdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7tkqm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.928851 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45mlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1955ba7-b91c-41de-97b7-188922cc0907\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c2298a7ba740a339e0cf8710c12bd89e613c3450bf9bbc1fdbf21d93e3da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28
T15:18:37Z\\\",\\\"message\\\":\\\"2026-01-28T15:17:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1db8ba10-58e2-4c2c-b351-ef1e6613bdd9\\\\n2026-01-28T15:17:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1db8ba10-58e2-4c2c-b351-ef1e6613bdd9 to /host/opt/cni/bin/\\\\n2026-01-28T15:17:52Z [verbose] multus-daemon started\\\\n2026-01-28T15:17:52Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:18:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\
\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmjgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45mlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.937528 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:56:13.430616134 +0000 UTC Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.942112 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa280ea6-1d32-4098-be1f-b7314f1a0576\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cfcf58f96119fd03a37cc47985518ce33dd48984718cc2968b6ebb43bd6913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd7df8770de3b5fe13ad0ac02c6a85f864af
968e4ef04084b30f7f566736090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svgpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7t965\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.956627 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"854bc471-aa40-4adf-9ca0-bc8a5a07d111\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:17:49Z\\\"
,\\\"message\\\":\\\"w=2026-01-28 15:17:49.295659507 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.295763 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295795 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295823 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0128 15:17:49.295838 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0128 15:17:49.295879 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0128 15:17:49.295983 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 15:17:49.295965 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769613469\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769613469\\\\\\\\\\\\\\\" (2026-01-28 14:17:49 +0000 UTC to 2027-01-28 14:17:49 +0000 UTC (now=2026-01-28 15:17:49.295932696 +0000 UTC))\\\\\\\"\\\\nI0128 15:17:49.296018 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 15:17:49.296031 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 15:17:49.296036 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:17:49.296058 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2378323882/tls.crt::/tmp/serving-cert-2378323882/tls.key\\\\\\\"\\\\nI0128 15:17:49.296086 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:17:49.296114 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.961848 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.961877 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.961914 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.961933 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.961944 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:58Z","lastTransitionTime":"2026-01-28T15:18:58Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.970573 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479d0f29-9dce-41a2-9b1a-75e157c26a03\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a2644afc15f39effb1daa734bb53944ad9c7e05e72fbf8e8627879b5cc6c473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31213dea27da9a677ea193b8780cc0e632e1263829731856bf682a3e1853e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd606211ea6e58c66c05dd5ecaf3a0c220b19bc7c82d1ea8d298ae82bd1b675e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a553dce0e11fd395cef308e3ec6756e7670462597b7eb183a4efbb1e0f638398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:58 crc kubenswrapper[4871]: I0128 15:18:58.989703 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45c896941c23ce31865d6010647b1417987a9e868e485d5c96bb459b41809613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.004612 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe589be-c3d0-406c-9112-2fed7909283c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7962019ec8fd01089bb09442086e6755abfb4da957877cb4912513c1ef8e6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0acf1eb42db861a8f31f48c6dfa26d0bb4eaccdd55b0701ef0a316acb9cf40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426705cab331853961ccc75903ab2c88ff27b89be3ca02535a7dedfd01efb741\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://195b215c8a43e394e1e8ba164856fed7cc4587372f66417393364b0bed54bf1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b9e5a91b9ad4dbf40ee8b50dae7278f7c412d8cfda457ac8d4e784cea272ef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb8b0c1362039ee780b2ff4da3d176dd4835900b43064e8fa85cb4ff484203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89589ff6e26fd70f0e505535bba8485a7ea037cb93be2a330899f1202030d1d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngvnx\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rz9nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.018780 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqb64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4db608d9-2e01-48ed-9a1c-ccedc49f414e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9241e605af5d75f72d7e925cb5f4275f02519ddfd7dc3e5f27d81534af38d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69
520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77475\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqb64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.029928 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jp46k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64aa044d-1eb6-4e5f-9c12-96ba346374fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrnj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jp46k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:59 crc 
kubenswrapper[4871]: I0128 15:18:59.042490 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9021abaa-911b-4eff-94d7-319546d60422\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1efc7e9400c3dfe89bbbdb0b78c108d7a5d013c433d5b62f53fe338f5d49b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://e0ff221c97a59e61461b6518db7c1e3b30bf02f544c14f247fa74ceea7317d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0ff221c97a59e61461b6518db7c1e3b30bf02f544c14f247fa74ceea7317d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.055806 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.068461 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.068519 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.068540 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.068562 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.068576 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:59Z","lastTransitionTime":"2026-01-28T15:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.072254 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.083252 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q868d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7f18c5-9c7c-483b-a476-470975bb1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30252094c6067e08bd4edf580e937f877cee626000158d4e718551641da628f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drx5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q868d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.102833 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"178343c8-b657-4440-953e-6daef3609145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0c2a2b38d9ad0ac6e4219b2ad68723992f3980697ca73508a0956647e966da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d0c2a2b38d9ad0ac6e4219b2ad68723992f3980697ca73508a0956647e966da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:18:43Z\\\",\\\"message\\\":\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, 
Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0128 15:18:43.861070 6948 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:43Z is after 2025-08-24T17:21:41Z]\\\\nI0128 15:18:43.861096 6948 model_cli\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://411da6c079c11b5c77
da0d765661662246fbe5b164a398c5ec8eca30c1241c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtss7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fn5bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.125717 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56247597-7568-4a98-b258-fb5941ab4d2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d273837861a3eae70081889de5611dc5328a91ffe22c463d4f39c0e3b8f805c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5113aff82b28b2bf9ee05e8273c1c48d3efb4cc97f9d535b056ccf73d9f6ed47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e576b1a1bfb2a16813d5857c447bae4be553f206f7b105d058a91d497cd772b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3e0bf9fb80126d30ac9dfd945915ce92369b4e4fdc28985d42fede1be2911f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://856cced87e3edd16482d4ff390f50e49710dd52638439b0282fe872b287ecb0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e23768cd66f1fda414eae6405e23eb60a90bc0a02fe02122b23ef467733144fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ef00e713378c718f673b18fcb4fc021316888872e8f436ba6ee0d590ddd92d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18a42abbce1992195981592bdfe41a3959fe149f4c46aa4e6971f2d4cfc096a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.138911 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4086ef-d74f-4def-a51d-f621702b91a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91de31187f09943342f5b7715cab47adb85847386cd33dce6cd27aed49702c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badf4a8463e96297ec90be8a16c7523275ceccb3775139e77458d09c5e6f934\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:
30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b781fdcdbae55cdd7843985cae9ed06fd400232d693eb6845771e527ced5d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:17:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.157086 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3a6ca23759ccbb87c268b06e99c231daea375bae79d6c17a6d0cc747cb92f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42accd13fe7c3d9441c13fdcfe347aba143acbb2dd7bd83bb10df5501a107c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.172052 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.172098 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.172115 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.172156 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.172231 4871 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:59Z","lastTransitionTime":"2026-01-28T15:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.174156 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1847c62b3ec74aa7da0b6be3046e0cedbba2dae30f832d1f7ce44a6210a759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.191033 4871 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:17:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:18:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.276896 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.276940 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.276952 4871 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.276970 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.276981 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:59Z","lastTransitionTime":"2026-01-28T15:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.380570 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.380690 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.380714 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.380741 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.380760 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:59Z","lastTransitionTime":"2026-01-28T15:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.484195 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.484253 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.484266 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.484284 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.484296 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:59Z","lastTransitionTime":"2026-01-28T15:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.586419 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.586456 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.586465 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.586480 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.586491 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:59Z","lastTransitionTime":"2026-01-28T15:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.690096 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.690140 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.690153 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.690171 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.690184 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:59Z","lastTransitionTime":"2026-01-28T15:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.792777 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.792849 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.792873 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.792900 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.792923 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:59Z","lastTransitionTime":"2026-01-28T15:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.895122 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.895163 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.895175 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.895188 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.895197 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:59Z","lastTransitionTime":"2026-01-28T15:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.903387 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.903387 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:18:59 crc kubenswrapper[4871]: E0128 15:18:59.903488 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:18:59 crc kubenswrapper[4871]: E0128 15:18:59.903567 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.937716 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 14:35:30.947217772 +0000 UTC Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.998781 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.998841 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.998858 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.998881 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:18:59 crc kubenswrapper[4871]: I0128 15:18:59.998898 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:18:59Z","lastTransitionTime":"2026-01-28T15:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.101642 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.101711 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.101730 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.101755 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.101777 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:00Z","lastTransitionTime":"2026-01-28T15:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.204990 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.205054 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.205075 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.205100 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.205117 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:00Z","lastTransitionTime":"2026-01-28T15:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.308160 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.308239 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.308260 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.308284 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.308300 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:00Z","lastTransitionTime":"2026-01-28T15:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.412223 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.412298 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.412323 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.412354 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.412377 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:00Z","lastTransitionTime":"2026-01-28T15:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.515210 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.515264 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.515289 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.515319 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.515345 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:00Z","lastTransitionTime":"2026-01-28T15:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.617890 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.617948 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.617965 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.617990 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.618008 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:00Z","lastTransitionTime":"2026-01-28T15:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.721462 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.721524 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.721545 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.721569 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.721612 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:00Z","lastTransitionTime":"2026-01-28T15:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.824285 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.824351 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.824376 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.824408 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.824432 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:00Z","lastTransitionTime":"2026-01-28T15:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.903884 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.904058 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:19:00 crc kubenswrapper[4871]: E0128 15:19:00.904056 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:00 crc kubenswrapper[4871]: E0128 15:19:00.904221 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.905366 4871 scope.go:117] "RemoveContainer" containerID="7d0c2a2b38d9ad0ac6e4219b2ad68723992f3980697ca73508a0956647e966da" Jan 28 15:19:00 crc kubenswrapper[4871]: E0128 15:19:00.905923 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podUID="178343c8-b657-4440-953e-6daef3609145" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.926853 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.926997 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.927023 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.927052 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.927073 4871 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:00Z","lastTransitionTime":"2026-01-28T15:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:00 crc kubenswrapper[4871]: I0128 15:19:00.938648 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:07:40.837994826 +0000 UTC Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.030094 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.030179 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.030196 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.030221 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.030235 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:01Z","lastTransitionTime":"2026-01-28T15:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.134041 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.134113 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.134130 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.134156 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.134174 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:01Z","lastTransitionTime":"2026-01-28T15:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.237081 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.237120 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.237130 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.237145 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.237154 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:01Z","lastTransitionTime":"2026-01-28T15:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.340111 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.340160 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.340172 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.340188 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.340199 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:01Z","lastTransitionTime":"2026-01-28T15:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.443155 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.443227 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.443246 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.443269 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.443283 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:01Z","lastTransitionTime":"2026-01-28T15:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.546259 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.546361 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.546386 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.546415 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.546437 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:01Z","lastTransitionTime":"2026-01-28T15:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.649319 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.649399 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.649420 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.649447 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.649466 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:01Z","lastTransitionTime":"2026-01-28T15:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.751947 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.751988 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.751997 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.752014 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.752024 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:01Z","lastTransitionTime":"2026-01-28T15:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.855129 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.855183 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.855197 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.855224 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.855247 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:01Z","lastTransitionTime":"2026-01-28T15:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.903876 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.903931 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:01 crc kubenswrapper[4871]: E0128 15:19:01.904263 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:01 crc kubenswrapper[4871]: E0128 15:19:01.904370 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.938916 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:58:38.919046882 +0000 UTC Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.958846 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.958911 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.958925 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.958949 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:01 crc kubenswrapper[4871]: I0128 15:19:01.958969 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:01Z","lastTransitionTime":"2026-01-28T15:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.062904 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.063037 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.063059 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.063085 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.063102 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:02Z","lastTransitionTime":"2026-01-28T15:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.166086 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.166155 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.166175 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.166200 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.166220 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:02Z","lastTransitionTime":"2026-01-28T15:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.270235 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.270310 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.270325 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.270344 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.270378 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:02Z","lastTransitionTime":"2026-01-28T15:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.372891 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.372924 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.372932 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.372944 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.372956 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:02Z","lastTransitionTime":"2026-01-28T15:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.475460 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.475542 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.475570 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.475627 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.475647 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:02Z","lastTransitionTime":"2026-01-28T15:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.578892 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.578937 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.578957 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.578978 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.578995 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:02Z","lastTransitionTime":"2026-01-28T15:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.682621 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.682684 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.682702 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.682723 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.682739 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:02Z","lastTransitionTime":"2026-01-28T15:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.785525 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.785584 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.785634 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.785659 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.785677 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:02Z","lastTransitionTime":"2026-01-28T15:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.889245 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.889316 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.889334 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.889357 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.889372 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:02Z","lastTransitionTime":"2026-01-28T15:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.903139 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.903214 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:02 crc kubenswrapper[4871]: E0128 15:19:02.903367 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:19:02 crc kubenswrapper[4871]: E0128 15:19:02.903471 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.939655 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 13:30:43.618316833 +0000 UTC Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.992253 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.992309 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.992326 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.992346 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:02 crc kubenswrapper[4871]: I0128 15:19:02.992363 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:02Z","lastTransitionTime":"2026-01-28T15:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.095696 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.095748 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.095764 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.095786 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.095804 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:03Z","lastTransitionTime":"2026-01-28T15:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.199059 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.199120 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.199136 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.199159 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.199176 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:03Z","lastTransitionTime":"2026-01-28T15:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.302281 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.302358 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.302385 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.302416 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.302442 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:03Z","lastTransitionTime":"2026-01-28T15:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.405806 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.405845 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.405856 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.405870 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.405880 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:03Z","lastTransitionTime":"2026-01-28T15:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.509489 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.509551 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.509575 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.509659 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.509683 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:03Z","lastTransitionTime":"2026-01-28T15:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.612860 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.612929 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.612949 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.612977 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.613001 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:03Z","lastTransitionTime":"2026-01-28T15:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.716354 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.716434 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.716458 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.716488 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.716509 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:03Z","lastTransitionTime":"2026-01-28T15:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.819709 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.819789 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.819807 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.819834 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.819852 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:03Z","lastTransitionTime":"2026-01-28T15:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.903759 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:03 crc kubenswrapper[4871]: E0128 15:19:03.903954 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.903769 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:03 crc kubenswrapper[4871]: E0128 15:19:03.904499 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.922398 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.922457 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.922472 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.922495 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.922509 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:03Z","lastTransitionTime":"2026-01-28T15:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:03 crc kubenswrapper[4871]: I0128 15:19:03.940081 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 13:08:41.926822757 +0000 UTC Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.025673 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.025726 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.025738 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.025757 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.025769 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:04Z","lastTransitionTime":"2026-01-28T15:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.128930 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.129005 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.129030 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.129061 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.129082 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:04Z","lastTransitionTime":"2026-01-28T15:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.232201 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.232266 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.232286 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.232313 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.232329 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:04Z","lastTransitionTime":"2026-01-28T15:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.335383 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.335463 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.335487 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.335516 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.335539 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:04Z","lastTransitionTime":"2026-01-28T15:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.438529 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.438613 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.438631 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.438656 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.438674 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:04Z","lastTransitionTime":"2026-01-28T15:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.541059 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.541132 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.541158 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.541190 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.541210 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:04Z","lastTransitionTime":"2026-01-28T15:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.644547 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.644643 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.644664 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.644689 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.644706 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:04Z","lastTransitionTime":"2026-01-28T15:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.747442 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.747512 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.747536 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.747567 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.747622 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:04Z","lastTransitionTime":"2026-01-28T15:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.851553 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.851636 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.851658 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.851682 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.851698 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:04Z","lastTransitionTime":"2026-01-28T15:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.903784 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.903787 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:19:04 crc kubenswrapper[4871]: E0128 15:19:04.903965 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:04 crc kubenswrapper[4871]: E0128 15:19:04.904143 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.940567 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 11:48:06.041367017 +0000 UTC Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.953872 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.953930 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.953948 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.953973 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:04 crc kubenswrapper[4871]: I0128 15:19:04.953992 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:04Z","lastTransitionTime":"2026-01-28T15:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.057911 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.057968 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.057985 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.058011 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.058029 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:05Z","lastTransitionTime":"2026-01-28T15:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.161333 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.161391 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.161407 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.161431 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.161449 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:05Z","lastTransitionTime":"2026-01-28T15:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.265159 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.265213 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.265230 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.265252 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.265269 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:05Z","lastTransitionTime":"2026-01-28T15:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.368015 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.368057 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.368072 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.368093 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.368112 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:05Z","lastTransitionTime":"2026-01-28T15:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.471368 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.471432 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.471450 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.471475 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.471492 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:05Z","lastTransitionTime":"2026-01-28T15:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.574754 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.574822 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.574841 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.574866 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.574885 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:05Z","lastTransitionTime":"2026-01-28T15:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.678557 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.678675 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.678700 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.678733 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.678762 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:05Z","lastTransitionTime":"2026-01-28T15:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.781530 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.781668 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.781688 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.781711 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.781728 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:05Z","lastTransitionTime":"2026-01-28T15:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.884463 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.884540 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.884562 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.884625 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.884645 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:05Z","lastTransitionTime":"2026-01-28T15:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.903932 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.903930 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:05 crc kubenswrapper[4871]: E0128 15:19:05.904184 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:05 crc kubenswrapper[4871]: E0128 15:19:05.904306 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.941078 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:22:28.377155213 +0000 UTC Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.988009 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.988075 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.988094 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.988119 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:05 crc kubenswrapper[4871]: I0128 15:19:05.988137 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:05Z","lastTransitionTime":"2026-01-28T15:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.090756 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.090812 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.090834 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.090862 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.090892 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:06Z","lastTransitionTime":"2026-01-28T15:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.195078 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.195210 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.195231 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.195254 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.195272 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:06Z","lastTransitionTime":"2026-01-28T15:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.216508 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.216628 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.216654 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.216686 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.216711 4871 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:06Z","lastTransitionTime":"2026-01-28T15:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.285634 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9"] Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.286461 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.289530 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.289804 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.290090 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.290276 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.315603 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-pqb64" podStartSLOduration=77.315558353 podStartE2EDuration="1m17.315558353s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:06.304377355 +0000 UTC m=+98.200215687" watchObservedRunningTime="2026-01-28 15:19:06.315558353 +0000 UTC m=+98.211396695" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.341291 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=76.341265415 podStartE2EDuration="1m16.341265415s" podCreationTimestamp="2026-01-28 15:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:06.340619946 +0000 UTC m=+98.236458268" watchObservedRunningTime="2026-01-28 15:19:06.341265415 +0000 UTC m=+98.237103747" Jan 
28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.374632 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.374612245 podStartE2EDuration="48.374612245s" podCreationTimestamp="2026-01-28 15:18:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:06.35809147 +0000 UTC m=+98.253929812" watchObservedRunningTime="2026-01-28 15:19:06.374612245 +0000 UTC m=+98.270450557" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.394618 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rz9nh" podStartSLOduration=77.394580378 podStartE2EDuration="1m17.394580378s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:06.394160835 +0000 UTC m=+98.289999157" watchObservedRunningTime="2026-01-28 15:19:06.394580378 +0000 UTC m=+98.290418720" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.394810 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dd051371-6543-4d7f-bcd8-5041c5e45dbd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-z77l9\" (UID: \"dd051371-6543-4d7f-bcd8-5041c5e45dbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.394870 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd051371-6543-4d7f-bcd8-5041c5e45dbd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-z77l9\" (UID: \"dd051371-6543-4d7f-bcd8-5041c5e45dbd\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.394904 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd051371-6543-4d7f-bcd8-5041c5e45dbd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-z77l9\" (UID: \"dd051371-6543-4d7f-bcd8-5041c5e45dbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.395009 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dd051371-6543-4d7f-bcd8-5041c5e45dbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-z77l9\" (UID: \"dd051371-6543-4d7f-bcd8-5041c5e45dbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.395076 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dd051371-6543-4d7f-bcd8-5041c5e45dbd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-z77l9\" (UID: \"dd051371-6543-4d7f-bcd8-5041c5e45dbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.440110 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=33.440091017 podStartE2EDuration="33.440091017s" podCreationTimestamp="2026-01-28 15:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:06.427490815 +0000 UTC m=+98.323329137" watchObservedRunningTime="2026-01-28 15:19:06.440091017 +0000 
UTC m=+98.335929349" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.465644 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-q868d" podStartSLOduration=78.465627614 podStartE2EDuration="1m18.465627614s" podCreationTimestamp="2026-01-28 15:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:06.465453658 +0000 UTC m=+98.361291980" watchObservedRunningTime="2026-01-28 15:19:06.465627614 +0000 UTC m=+98.361465936" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.496201 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dd051371-6543-4d7f-bcd8-5041c5e45dbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-z77l9\" (UID: \"dd051371-6543-4d7f-bcd8-5041c5e45dbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.496264 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dd051371-6543-4d7f-bcd8-5041c5e45dbd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-z77l9\" (UID: \"dd051371-6543-4d7f-bcd8-5041c5e45dbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.496311 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dd051371-6543-4d7f-bcd8-5041c5e45dbd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-z77l9\" (UID: \"dd051371-6543-4d7f-bcd8-5041c5e45dbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.496334 4871 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd051371-6543-4d7f-bcd8-5041c5e45dbd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-z77l9\" (UID: \"dd051371-6543-4d7f-bcd8-5041c5e45dbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.496353 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd051371-6543-4d7f-bcd8-5041c5e45dbd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-z77l9\" (UID: \"dd051371-6543-4d7f-bcd8-5041c5e45dbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.496418 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dd051371-6543-4d7f-bcd8-5041c5e45dbd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-z77l9\" (UID: \"dd051371-6543-4d7f-bcd8-5041c5e45dbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.496414 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dd051371-6543-4d7f-bcd8-5041c5e45dbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-z77l9\" (UID: \"dd051371-6543-4d7f-bcd8-5041c5e45dbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.497239 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dd051371-6543-4d7f-bcd8-5041c5e45dbd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-z77l9\" (UID: \"dd051371-6543-4d7f-bcd8-5041c5e45dbd\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.502794 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd051371-6543-4d7f-bcd8-5041c5e45dbd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-z77l9\" (UID: \"dd051371-6543-4d7f-bcd8-5041c5e45dbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.511025 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=77.510998329 podStartE2EDuration="1m17.510998329s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:06.508904383 +0000 UTC m=+98.404742705" watchObservedRunningTime="2026-01-28 15:19:06.510998329 +0000 UTC m=+98.406836661" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.514388 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd051371-6543-4d7f-bcd8-5041c5e45dbd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-z77l9\" (UID: \"dd051371-6543-4d7f-bcd8-5041c5e45dbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.529309 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=77.529290889 podStartE2EDuration="1m17.529290889s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:06.528937928 +0000 UTC m=+98.424776250" 
watchObservedRunningTime="2026-01-28 15:19:06.529290889 +0000 UTC m=+98.425129211" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.580743 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podStartSLOduration=77.580715393 podStartE2EDuration="1m17.580715393s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:06.580653691 +0000 UTC m=+98.476492023" watchObservedRunningTime="2026-01-28 15:19:06.580715393 +0000 UTC m=+98.476553725" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.608807 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7t965" podStartSLOduration=77.608778488 podStartE2EDuration="1m17.608778488s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:06.607672973 +0000 UTC m=+98.503511305" watchObservedRunningTime="2026-01-28 15:19:06.608778488 +0000 UTC m=+98.504616830" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.609495 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-45mlg" podStartSLOduration=77.60948216 podStartE2EDuration="1m17.60948216s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:06.595084011 +0000 UTC m=+98.490922343" watchObservedRunningTime="2026-01-28 15:19:06.60948216 +0000 UTC m=+98.505320502" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.609948 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" Jan 28 15:19:06 crc kubenswrapper[4871]: W0128 15:19:06.636012 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd051371_6543_4d7f_bcd8_5041c5e45dbd.slice/crio-7daee25ae6109e50190af244218ee29309191fb0774cf323f750b68a98ec6bf3 WatchSource:0}: Error finding container 7daee25ae6109e50190af244218ee29309191fb0774cf323f750b68a98ec6bf3: Status 404 returned error can't find the container with id 7daee25ae6109e50190af244218ee29309191fb0774cf323f750b68a98ec6bf3 Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.903824 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:06 crc kubenswrapper[4871]: E0128 15:19:06.904034 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.904432 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:19:06 crc kubenswrapper[4871]: E0128 15:19:06.904917 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.941984 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 02:34:50.515300155 +0000 UTC Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.942080 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 28 15:19:06 crc kubenswrapper[4871]: I0128 15:19:06.951228 4871 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 15:19:07 crc kubenswrapper[4871]: I0128 15:19:07.203544 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs\") pod \"network-metrics-daemon-jp46k\" (UID: \"64aa044d-1eb6-4e5f-9c12-96ba346374fa\") " pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:19:07 crc kubenswrapper[4871]: E0128 15:19:07.203848 4871 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:07 crc kubenswrapper[4871]: E0128 15:19:07.204008 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs podName:64aa044d-1eb6-4e5f-9c12-96ba346374fa nodeName:}" failed. No retries permitted until 2026-01-28 15:20:11.203952388 +0000 UTC m=+163.099790740 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs") pod "network-metrics-daemon-jp46k" (UID: "64aa044d-1eb6-4e5f-9c12-96ba346374fa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:07 crc kubenswrapper[4871]: I0128 15:19:07.557987 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" event={"ID":"dd051371-6543-4d7f-bcd8-5041c5e45dbd","Type":"ContainerStarted","Data":"68f0bcec939764900ea13c92ad18a7f60936db9bb8ddf00deaa2c97545c47584"} Jan 28 15:19:07 crc kubenswrapper[4871]: I0128 15:19:07.558070 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" event={"ID":"dd051371-6543-4d7f-bcd8-5041c5e45dbd","Type":"ContainerStarted","Data":"7daee25ae6109e50190af244218ee29309191fb0774cf323f750b68a98ec6bf3"} Jan 28 15:19:07 crc kubenswrapper[4871]: I0128 15:19:07.903098 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:07 crc kubenswrapper[4871]: E0128 15:19:07.903275 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:07 crc kubenswrapper[4871]: I0128 15:19:07.903614 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:07 crc kubenswrapper[4871]: E0128 15:19:07.903708 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:08 crc kubenswrapper[4871]: I0128 15:19:08.903105 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:08 crc kubenswrapper[4871]: I0128 15:19:08.903266 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:19:08 crc kubenswrapper[4871]: E0128 15:19:08.904898 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:08 crc kubenswrapper[4871]: E0128 15:19:08.905263 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:19:09 crc kubenswrapper[4871]: I0128 15:19:09.903041 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:09 crc kubenswrapper[4871]: I0128 15:19:09.903147 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:09 crc kubenswrapper[4871]: E0128 15:19:09.903181 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:09 crc kubenswrapper[4871]: E0128 15:19:09.903339 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:10 crc kubenswrapper[4871]: I0128 15:19:10.903383 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:19:10 crc kubenswrapper[4871]: I0128 15:19:10.903425 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:10 crc kubenswrapper[4871]: E0128 15:19:10.903843 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:10 crc kubenswrapper[4871]: E0128 15:19:10.903960 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:19:11 crc kubenswrapper[4871]: I0128 15:19:11.903542 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:11 crc kubenswrapper[4871]: I0128 15:19:11.903625 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:11 crc kubenswrapper[4871]: E0128 15:19:11.903780 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:11 crc kubenswrapper[4871]: E0128 15:19:11.904085 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:12 crc kubenswrapper[4871]: I0128 15:19:12.903905 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:19:12 crc kubenswrapper[4871]: I0128 15:19:12.904076 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:12 crc kubenswrapper[4871]: E0128 15:19:12.904177 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:19:12 crc kubenswrapper[4871]: E0128 15:19:12.904253 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:13 crc kubenswrapper[4871]: I0128 15:19:13.903259 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:13 crc kubenswrapper[4871]: I0128 15:19:13.903291 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:13 crc kubenswrapper[4871]: E0128 15:19:13.903389 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:13 crc kubenswrapper[4871]: E0128 15:19:13.903453 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:14 crc kubenswrapper[4871]: I0128 15:19:14.903503 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:19:14 crc kubenswrapper[4871]: E0128 15:19:14.903773 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:19:14 crc kubenswrapper[4871]: I0128 15:19:14.903843 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:14 crc kubenswrapper[4871]: E0128 15:19:14.904015 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:15 crc kubenswrapper[4871]: I0128 15:19:15.903006 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:15 crc kubenswrapper[4871]: I0128 15:19:15.903101 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:15 crc kubenswrapper[4871]: E0128 15:19:15.903203 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:15 crc kubenswrapper[4871]: E0128 15:19:15.903677 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:15 crc kubenswrapper[4871]: I0128 15:19:15.904861 4871 scope.go:117] "RemoveContainer" containerID="7d0c2a2b38d9ad0ac6e4219b2ad68723992f3980697ca73508a0956647e966da" Jan 28 15:19:15 crc kubenswrapper[4871]: E0128 15:19:15.905126 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fn5bb_openshift-ovn-kubernetes(178343c8-b657-4440-953e-6daef3609145)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podUID="178343c8-b657-4440-953e-6daef3609145" Jan 28 15:19:16 crc kubenswrapper[4871]: I0128 15:19:16.903958 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:19:16 crc kubenswrapper[4871]: E0128 15:19:16.904278 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:19:16 crc kubenswrapper[4871]: I0128 15:19:16.904567 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:16 crc kubenswrapper[4871]: E0128 15:19:16.904725 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:17 crc kubenswrapper[4871]: I0128 15:19:17.902894 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:17 crc kubenswrapper[4871]: I0128 15:19:17.902964 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:17 crc kubenswrapper[4871]: E0128 15:19:17.903556 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:17 crc kubenswrapper[4871]: E0128 15:19:17.903750 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:18 crc kubenswrapper[4871]: I0128 15:19:18.903857 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:18 crc kubenswrapper[4871]: I0128 15:19:18.904870 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:19:18 crc kubenswrapper[4871]: E0128 15:19:18.906128 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:18 crc kubenswrapper[4871]: E0128 15:19:18.906295 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:19:19 crc kubenswrapper[4871]: I0128 15:19:19.903480 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:19 crc kubenswrapper[4871]: I0128 15:19:19.903490 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:19 crc kubenswrapper[4871]: E0128 15:19:19.903890 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:19 crc kubenswrapper[4871]: E0128 15:19:19.904105 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:20 crc kubenswrapper[4871]: I0128 15:19:20.902948 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:20 crc kubenswrapper[4871]: I0128 15:19:20.903364 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:19:20 crc kubenswrapper[4871]: E0128 15:19:20.903403 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:20 crc kubenswrapper[4871]: E0128 15:19:20.903680 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:19:21 crc kubenswrapper[4871]: I0128 15:19:21.904030 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:21 crc kubenswrapper[4871]: I0128 15:19:21.904068 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:21 crc kubenswrapper[4871]: E0128 15:19:21.904307 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:21 crc kubenswrapper[4871]: E0128 15:19:21.904664 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:22 crc kubenswrapper[4871]: I0128 15:19:22.903201 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:19:22 crc kubenswrapper[4871]: I0128 15:19:22.903310 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:22 crc kubenswrapper[4871]: E0128 15:19:22.903446 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:19:22 crc kubenswrapper[4871]: E0128 15:19:22.903527 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:23 crc kubenswrapper[4871]: I0128 15:19:23.615352 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-45mlg_d1955ba7-b91c-41de-97b7-188922cc0907/kube-multus/1.log" Jan 28 15:19:23 crc kubenswrapper[4871]: I0128 15:19:23.615810 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-45mlg_d1955ba7-b91c-41de-97b7-188922cc0907/kube-multus/0.log" Jan 28 15:19:23 crc kubenswrapper[4871]: I0128 15:19:23.615856 4871 generic.go:334] "Generic (PLEG): container finished" podID="d1955ba7-b91c-41de-97b7-188922cc0907" containerID="27c2298a7ba740a339e0cf8710c12bd89e613c3450bf9bbc1fdbf21d93e3da41" exitCode=1 Jan 28 15:19:23 crc kubenswrapper[4871]: I0128 15:19:23.615890 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-45mlg" event={"ID":"d1955ba7-b91c-41de-97b7-188922cc0907","Type":"ContainerDied","Data":"27c2298a7ba740a339e0cf8710c12bd89e613c3450bf9bbc1fdbf21d93e3da41"} Jan 28 15:19:23 crc kubenswrapper[4871]: I0128 15:19:23.615929 4871 scope.go:117] "RemoveContainer" containerID="7725c21d8555a97bbff2ee0b956fbc8d4640da3f12c25f3e2ba0500cb69b80eb" Jan 28 15:19:23 crc kubenswrapper[4871]: I0128 15:19:23.616409 4871 scope.go:117] "RemoveContainer" containerID="27c2298a7ba740a339e0cf8710c12bd89e613c3450bf9bbc1fdbf21d93e3da41" Jan 28 15:19:23 crc kubenswrapper[4871]: E0128 15:19:23.616662 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-45mlg_openshift-multus(d1955ba7-b91c-41de-97b7-188922cc0907)\"" pod="openshift-multus/multus-45mlg" podUID="d1955ba7-b91c-41de-97b7-188922cc0907" Jan 28 15:19:23 crc kubenswrapper[4871]: I0128 15:19:23.633708 4871 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z77l9" podStartSLOduration=95.633693891 podStartE2EDuration="1m35.633693891s" podCreationTimestamp="2026-01-28 15:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:07.581378998 +0000 UTC m=+99.477217350" watchObservedRunningTime="2026-01-28 15:19:23.633693891 +0000 UTC m=+115.529532213" Jan 28 15:19:23 crc kubenswrapper[4871]: I0128 15:19:23.903242 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:23 crc kubenswrapper[4871]: I0128 15:19:23.903348 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:23 crc kubenswrapper[4871]: E0128 15:19:23.903377 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:23 crc kubenswrapper[4871]: E0128 15:19:23.903553 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:24 crc kubenswrapper[4871]: I0128 15:19:24.623119 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-45mlg_d1955ba7-b91c-41de-97b7-188922cc0907/kube-multus/1.log" Jan 28 15:19:24 crc kubenswrapper[4871]: I0128 15:19:24.903371 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:19:24 crc kubenswrapper[4871]: E0128 15:19:24.903644 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:19:24 crc kubenswrapper[4871]: I0128 15:19:24.903726 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:24 crc kubenswrapper[4871]: E0128 15:19:24.903958 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:25 crc kubenswrapper[4871]: I0128 15:19:25.902925 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:25 crc kubenswrapper[4871]: I0128 15:19:25.903025 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:25 crc kubenswrapper[4871]: E0128 15:19:25.903096 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:25 crc kubenswrapper[4871]: E0128 15:19:25.903216 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:26 crc kubenswrapper[4871]: I0128 15:19:26.903493 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:19:26 crc kubenswrapper[4871]: I0128 15:19:26.903668 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:26 crc kubenswrapper[4871]: E0128 15:19:26.904081 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:19:26 crc kubenswrapper[4871]: E0128 15:19:26.904216 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:27 crc kubenswrapper[4871]: I0128 15:19:27.903368 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:27 crc kubenswrapper[4871]: I0128 15:19:27.903368 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:27 crc kubenswrapper[4871]: E0128 15:19:27.903637 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:27 crc kubenswrapper[4871]: E0128 15:19:27.903793 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:28 crc kubenswrapper[4871]: E0128 15:19:28.850019 4871 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 28 15:19:28 crc kubenswrapper[4871]: I0128 15:19:28.902897 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:19:28 crc kubenswrapper[4871]: I0128 15:19:28.902991 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:28 crc kubenswrapper[4871]: E0128 15:19:28.904228 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:19:28 crc kubenswrapper[4871]: E0128 15:19:28.904559 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:28 crc kubenswrapper[4871]: I0128 15:19:28.904975 4871 scope.go:117] "RemoveContainer" containerID="7d0c2a2b38d9ad0ac6e4219b2ad68723992f3980697ca73508a0956647e966da" Jan 28 15:19:29 crc kubenswrapper[4871]: E0128 15:19:29.005692 4871 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:19:29 crc kubenswrapper[4871]: I0128 15:19:29.659825 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovnkube-controller/3.log" Jan 28 15:19:29 crc kubenswrapper[4871]: I0128 15:19:29.663067 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerStarted","Data":"7112857361fc94a800dbe93a58be9c315eea4e68600708fd11b6a78854fae1b7"} Jan 28 15:19:29 crc kubenswrapper[4871]: I0128 15:19:29.663492 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:19:29 crc kubenswrapper[4871]: I0128 15:19:29.770299 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podStartSLOduration=100.770267908 podStartE2EDuration="1m40.770267908s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:29.696116205 +0000 UTC m=+121.591954537" watchObservedRunningTime="2026-01-28 15:19:29.770267908 +0000 UTC m=+121.666106270" Jan 28 15:19:29 crc kubenswrapper[4871]: I0128 15:19:29.772889 4871 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jp46k"] Jan 28 15:19:29 crc kubenswrapper[4871]: I0128 15:19:29.773281 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:19:29 crc kubenswrapper[4871]: E0128 15:19:29.773686 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:19:29 crc kubenswrapper[4871]: I0128 15:19:29.903734 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:29 crc kubenswrapper[4871]: I0128 15:19:29.903792 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:29 crc kubenswrapper[4871]: E0128 15:19:29.903863 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:29 crc kubenswrapper[4871]: E0128 15:19:29.903935 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:30 crc kubenswrapper[4871]: I0128 15:19:30.903337 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:19:30 crc kubenswrapper[4871]: I0128 15:19:30.903394 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:30 crc kubenswrapper[4871]: E0128 15:19:30.903510 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa" Jan 28 15:19:30 crc kubenswrapper[4871]: E0128 15:19:30.903708 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:31 crc kubenswrapper[4871]: I0128 15:19:31.903247 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:31 crc kubenswrapper[4871]: I0128 15:19:31.903295 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:31 crc kubenswrapper[4871]: E0128 15:19:31.903437 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:31 crc kubenswrapper[4871]: E0128 15:19:31.903566 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:32 crc kubenswrapper[4871]: I0128 15:19:32.903510 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:19:32 crc kubenswrapper[4871]: I0128 15:19:32.903562 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:32 crc kubenswrapper[4871]: E0128 15:19:32.903794 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa"
Jan 28 15:19:32 crc kubenswrapper[4871]: E0128 15:19:32.903887 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:19:33 crc kubenswrapper[4871]: I0128 15:19:33.903403 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:19:33 crc kubenswrapper[4871]: I0128 15:19:33.903448 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:19:33 crc kubenswrapper[4871]: E0128 15:19:33.903558 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:19:33 crc kubenswrapper[4871]: E0128 15:19:33.903717 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:19:34 crc kubenswrapper[4871]: E0128 15:19:34.007267 4871 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 28 15:19:34 crc kubenswrapper[4871]: I0128 15:19:34.904007 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:19:34 crc kubenswrapper[4871]: I0128 15:19:34.904008 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k"
Jan 28 15:19:34 crc kubenswrapper[4871]: E0128 15:19:34.904126 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:19:34 crc kubenswrapper[4871]: I0128 15:19:34.904488 4871 scope.go:117] "RemoveContainer" containerID="27c2298a7ba740a339e0cf8710c12bd89e613c3450bf9bbc1fdbf21d93e3da41"
Jan 28 15:19:34 crc kubenswrapper[4871]: E0128 15:19:34.904518 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa"
Jan 28 15:19:35 crc kubenswrapper[4871]: I0128 15:19:35.691543 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-45mlg_d1955ba7-b91c-41de-97b7-188922cc0907/kube-multus/1.log"
Jan 28 15:19:35 crc kubenswrapper[4871]: I0128 15:19:35.692008 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-45mlg" event={"ID":"d1955ba7-b91c-41de-97b7-188922cc0907","Type":"ContainerStarted","Data":"fece7b3fc90f5bd50df75ec40f6120278e747875e17b225537e037d57b4eed3f"}
Jan 28 15:19:35 crc kubenswrapper[4871]: I0128 15:19:35.903303 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:19:35 crc kubenswrapper[4871]: I0128 15:19:35.903343 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:19:35 crc kubenswrapper[4871]: E0128 15:19:35.903469 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:19:35 crc kubenswrapper[4871]: E0128 15:19:35.903551 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:19:36 crc kubenswrapper[4871]: I0128 15:19:36.903781 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k"
Jan 28 15:19:36 crc kubenswrapper[4871]: I0128 15:19:36.903813 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:19:36 crc kubenswrapper[4871]: E0128 15:19:36.903978 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa"
Jan 28 15:19:36 crc kubenswrapper[4871]: E0128 15:19:36.904068 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:19:37 crc kubenswrapper[4871]: I0128 15:19:37.903714 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:19:37 crc kubenswrapper[4871]: I0128 15:19:37.903764 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:19:37 crc kubenswrapper[4871]: E0128 15:19:37.903898 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:19:37 crc kubenswrapper[4871]: E0128 15:19:37.904087 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:19:38 crc kubenswrapper[4871]: I0128 15:19:38.903017 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k"
Jan 28 15:19:38 crc kubenswrapper[4871]: I0128 15:19:38.903042 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:19:38 crc kubenswrapper[4871]: E0128 15:19:38.905108 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jp46k" podUID="64aa044d-1eb6-4e5f-9c12-96ba346374fa"
Jan 28 15:19:38 crc kubenswrapper[4871]: E0128 15:19:38.905267 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:19:39 crc kubenswrapper[4871]: I0128 15:19:39.903361 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:19:39 crc kubenswrapper[4871]: I0128 15:19:39.903377 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:19:39 crc kubenswrapper[4871]: I0128 15:19:39.905803 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 28 15:19:39 crc kubenswrapper[4871]: I0128 15:19:39.905876 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 28 15:19:39 crc kubenswrapper[4871]: I0128 15:19:39.906057 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 28 15:19:39 crc kubenswrapper[4871]: I0128 15:19:39.906161 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 28 15:19:40 crc kubenswrapper[4871]: I0128 15:19:40.903826 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:19:40 crc kubenswrapper[4871]: I0128 15:19:40.903841 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k"
Jan 28 15:19:40 crc kubenswrapper[4871]: I0128 15:19:40.906671 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 28 15:19:40 crc kubenswrapper[4871]: I0128 15:19:40.907851 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.452778 4871 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.529278 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9qxf4"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.530116 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9qxf4"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.535969 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.536112 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.537946 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.540300 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.540955 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.545131 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.545220 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.545258 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.545283 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.545319 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.545677 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.548306 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.548332 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.550084 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-q48tf"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.550818 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-q48tf"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.552128 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.552964 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xpk8s"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.553779 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5ffkk"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.553821 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xpk8s"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.555021 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5ffkk"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.563057 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.563484 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.563530 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.563743 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.563944 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.563960 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.564037 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.564109 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.564299 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.564471 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tglzn"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.565337 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tglzn"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.565630 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vprhz"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.575661 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vprhz"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.578200 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.582860 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.583724 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.589948 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.595890 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dzwqq"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.596406 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.596652 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.597531 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.597809 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.598044 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.598215 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.598391 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.598574 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.598738 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.598792 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.599012 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.601148 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qnhwk"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.601822 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qnhwk"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.601978 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.604461 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2lxj6"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.605039 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2lxj6"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.613916 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.614161 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.614280 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.614390 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.614710 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.614887 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.614998 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.615108 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.615244 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.615384 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.615542 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.615656 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.615674 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.615932 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v4rcn"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.616405 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4p8f\" (UniqueName: \"kubernetes.io/projected/736c08e7-ec73-4045-808e-dc00a1cdf894-kube-api-access-w4p8f\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.616440 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/736c08e7-ec73-4045-808e-dc00a1cdf894-etcd-client\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.616459 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/736c08e7-ec73-4045-808e-dc00a1cdf894-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.616491 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/736c08e7-ec73-4045-808e-dc00a1cdf894-encryption-config\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.616507 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/736c08e7-ec73-4045-808e-dc00a1cdf894-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.616527 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pqrb\" (UniqueName: \"kubernetes.io/projected/95abe15d-d903-4744-87a3-61e27e8bb7e8-kube-api-access-8pqrb\") pod \"migrator-59844c95c7-9qxf4\" (UID: \"95abe15d-d903-4744-87a3-61e27e8bb7e8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9qxf4"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.616549 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/736c08e7-ec73-4045-808e-dc00a1cdf894-audit-policies\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.616568 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/736c08e7-ec73-4045-808e-dc00a1cdf894-audit-dir\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.616584 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c08e7-ec73-4045-808e-dc00a1cdf894-serving-cert\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.616660 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c7255"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.617042 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-c7255"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.617432 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v4rcn"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.619181 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.619508 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.619969 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jn4dg"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.620328 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.620574 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jn4dg"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.620621 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.621642 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.622158 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.623790 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bh7lg"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.624272 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.628648 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-p4lhv"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.629123 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l2q84"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.629258 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-p4lhv"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.629459 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-l2q84"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.633065 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d2f7r"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.633704 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lqzgh"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.634312 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqczj"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.634878 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vxr9x"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.635460 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqczj"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.635504 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ql229"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.635830 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vxr9x"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.635953 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ql229"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.636236 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d2f7r"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.636565 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lqzgh"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.637349 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.638314 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-9q8n5"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.638648 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.639062 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9q8n5"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.650892 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mq867"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.651612 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.652484 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mq867"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.653660 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.700774 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.701375 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.703091 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.703809 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.703825 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.709164 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.743564 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jhdgw"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.743924 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-4qvnc"]
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.744026 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jhdgw"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.744091 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.744269 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.744400 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.744534 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.744703 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.744795 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-client-ca\") pod \"controller-manager-879f6c89f-bh7lg\" (UID: \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.744834 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2828ed17-56e2-4da0-8e37-2b366d02fbae-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qnhwk\" (UID: \"2828ed17-56e2-4da0-8e37-2b366d02fbae\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qnhwk"
Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.744876 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g4hz\" (UniqueName: \"kubernetes.io/projected/2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce-kube-api-access-2g4hz\") pod \"console-operator-58897d9998-l2q84\" (UID: \"2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce\") "
pod="openshift-console-operator/console-operator-58897d9998-l2q84" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.744905 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/736c08e7-ec73-4045-808e-dc00a1cdf894-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.744935 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zthwv"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.745174 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4qvnc" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.745257 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-zdfpg"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.745662 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.745708 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-zdfpg" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.745710 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpb9p\" (UniqueName: \"kubernetes.io/projected/48a7be4a-2d1b-4b46-a720-4068e3fad906-kube-api-access-gpb9p\") pod \"marketplace-operator-79b997595-ql229\" (UID: \"48a7be4a-2d1b-4b46-a720-4068e3fad906\") " pod="openshift-marketplace/marketplace-operator-79b997595-ql229" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.745764 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2828ed17-56e2-4da0-8e37-2b366d02fbae-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qnhwk\" (UID: \"2828ed17-56e2-4da0-8e37-2b366d02fbae\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qnhwk" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.745778 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-j85fr"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.745811 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.745811 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ad8c2aba-da78-44df-a60b-40de6f250df9-signing-cabundle\") pod \"service-ca-9c57cc56f-vxr9x\" (UID: \"ad8c2aba-da78-44df-a60b-40de6f250df9\") " pod="openshift-service-ca/service-ca-9c57cc56f-vxr9x" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.745962 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pqrb\" (UniqueName: \"kubernetes.io/projected/95abe15d-d903-4744-87a3-61e27e8bb7e8-kube-api-access-8pqrb\") pod \"migrator-59844c95c7-9qxf4\" (UID: \"95abe15d-d903-4744-87a3-61e27e8bb7e8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9qxf4" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.745995 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dvph\" (UniqueName: \"kubernetes.io/projected/e6a80b6c-f05d-400b-adb2-8a1637983435-kube-api-access-4dvph\") pod \"etcd-operator-b45778765-c7255\" (UID: \"e6a80b6c-f05d-400b-adb2-8a1637983435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.746030 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/736c08e7-ec73-4045-808e-dc00a1cdf894-audit-policies\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.746056 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/75f1faad-49ec-4a72-944b-8857e0752a8c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2lxj6\" (UID: \"75f1faad-49ec-4a72-944b-8857e0752a8c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2lxj6" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.746090 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-serving-cert\") pod \"controller-manager-879f6c89f-bh7lg\" (UID: \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.746122 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c08e7-ec73-4045-808e-dc00a1cdf894-serving-cert\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.746145 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-config\") pod \"route-controller-manager-6576b87f9c-q676l\" (UID: \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.746151 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/736c08e7-ec73-4045-808e-dc00a1cdf894-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.746166 4871 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dc34e29b-b869-467b-85a6-aac06d35be0f-metrics-tls\") pod \"dns-operator-744455d44c-q48tf\" (UID: \"dc34e29b-b869-467b-85a6-aac06d35be0f\") " pod="openshift-dns-operator/dns-operator-744455d44c-q48tf" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.746232 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48a7be4a-2d1b-4b46-a720-4068e3fad906-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ql229\" (UID: \"48a7be4a-2d1b-4b46-a720-4068e3fad906\") " pod="openshift-marketplace/marketplace-operator-79b997595-ql229" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.746274 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6a80b6c-f05d-400b-adb2-8a1637983435-serving-cert\") pod \"etcd-operator-b45778765-c7255\" (UID: \"e6a80b6c-f05d-400b-adb2-8a1637983435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.746300 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce-config\") pod \"console-operator-58897d9998-l2q84\" (UID: \"2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce\") " pod="openshift-console-operator/console-operator-58897d9998-l2q84" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.746324 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a80b6c-f05d-400b-adb2-8a1637983435-config\") pod \"etcd-operator-b45778765-c7255\" (UID: \"e6a80b6c-f05d-400b-adb2-8a1637983435\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.746346 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce-trusted-ca\") pod \"console-operator-58897d9998-l2q84\" (UID: \"2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce\") " pod="openshift-console-operator/console-operator-58897d9998-l2q84" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.746657 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-serving-cert\") pod \"route-controller-manager-6576b87f9c-q676l\" (UID: \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.746682 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ad8c2aba-da78-44df-a60b-40de6f250df9-signing-key\") pod \"service-ca-9c57cc56f-vxr9x\" (UID: \"ad8c2aba-da78-44df-a60b-40de6f250df9\") " pod="openshift-service-ca/service-ca-9c57cc56f-vxr9x" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.746708 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e6a80b6c-f05d-400b-adb2-8a1637983435-etcd-service-ca\") pod \"etcd-operator-b45778765-c7255\" (UID: \"e6a80b6c-f05d-400b-adb2-8a1637983435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.746735 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/736c08e7-ec73-4045-808e-dc00a1cdf894-etcd-client\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.746766 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.746807 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/736c08e7-ec73-4045-808e-dc00a1cdf894-encryption-config\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.746874 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/736c08e7-ec73-4045-808e-dc00a1cdf894-audit-policies\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747050 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp7nf\" (UniqueName: \"kubernetes.io/projected/3e91d969-6cd9-40bd-afc2-7861502f0073-kube-api-access-xp7nf\") pod \"machine-config-controller-84d6567774-5ffkk\" (UID: \"3e91d969-6cd9-40bd-afc2-7861502f0073\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5ffkk" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747124 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6595\" (UniqueName: \"kubernetes.io/projected/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-kube-api-access-w6595\") pod 
\"route-controller-manager-6576b87f9c-q676l\" (UID: \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747153 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2828ed17-56e2-4da0-8e37-2b366d02fbae-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qnhwk\" (UID: \"2828ed17-56e2-4da0-8e37-2b366d02fbae\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qnhwk" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747228 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e91d969-6cd9-40bd-afc2-7861502f0073-proxy-tls\") pod \"machine-config-controller-84d6567774-5ffkk\" (UID: \"3e91d969-6cd9-40bd-afc2-7861502f0073\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5ffkk" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747252 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48a7be4a-2d1b-4b46-a720-4068e3fad906-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ql229\" (UID: \"48a7be4a-2d1b-4b46-a720-4068e3fad906\") " pod="openshift-marketplace/marketplace-operator-79b997595-ql229" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747311 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-client-ca\") pod \"route-controller-manager-6576b87f9c-q676l\" (UID: \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" Jan 
28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747336 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e6a80b6c-f05d-400b-adb2-8a1637983435-etcd-ca\") pod \"etcd-operator-b45778765-c7255\" (UID: \"e6a80b6c-f05d-400b-adb2-8a1637983435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747384 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hbfw\" (UniqueName: \"kubernetes.io/projected/dc34e29b-b869-467b-85a6-aac06d35be0f-kube-api-access-6hbfw\") pod \"dns-operator-744455d44c-q48tf\" (UID: \"dc34e29b-b869-467b-85a6-aac06d35be0f\") " pod="openshift-dns-operator/dns-operator-744455d44c-q48tf" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747406 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3e91d969-6cd9-40bd-afc2-7861502f0073-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5ffkk\" (UID: \"3e91d969-6cd9-40bd-afc2-7861502f0073\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5ffkk" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747457 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4cq8\" (UniqueName: \"kubernetes.io/projected/ad8c2aba-da78-44df-a60b-40de6f250df9-kube-api-access-t4cq8\") pod \"service-ca-9c57cc56f-vxr9x\" (UID: \"ad8c2aba-da78-44df-a60b-40de6f250df9\") " pod="openshift-service-ca/service-ca-9c57cc56f-vxr9x" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747078 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747483 
4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bh7lg\" (UID: \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747499 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747543 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs2wf\" (UniqueName: \"kubernetes.io/projected/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-kube-api-access-fs2wf\") pod \"controller-manager-879f6c89f-bh7lg\" (UID: \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747387 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747632 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/736c08e7-ec73-4045-808e-dc00a1cdf894-audit-dir\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747443 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747749 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 15:19:47 crc 
kubenswrapper[4871]: I0128 15:19:47.747817 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747890 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747942 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e6a80b6c-f05d-400b-adb2-8a1637983435-etcd-client\") pod \"etcd-operator-b45778765-c7255\" (UID: \"e6a80b6c-f05d-400b-adb2-8a1637983435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747988 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/736c08e7-ec73-4045-808e-dc00a1cdf894-audit-dir\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.747997 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.748061 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75f1faad-49ec-4a72-944b-8857e0752a8c-config\") pod \"kube-apiserver-operator-766d6c64bb-2lxj6\" (UID: \"75f1faad-49ec-4a72-944b-8857e0752a8c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2lxj6" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.748081 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.748123 4871 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4p8f\" (UniqueName: \"kubernetes.io/projected/736c08e7-ec73-4045-808e-dc00a1cdf894-kube-api-access-w4p8f\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.748163 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-config\") pod \"controller-manager-879f6c89f-bh7lg\" (UID: \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.748239 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.748279 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/736c08e7-ec73-4045-808e-dc00a1cdf894-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.748311 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75f1faad-49ec-4a72-944b-8857e0752a8c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2lxj6\" (UID: \"75f1faad-49ec-4a72-944b-8857e0752a8c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2lxj6" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.748330 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce-serving-cert\") pod \"console-operator-58897d9998-l2q84\" (UID: \"2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce\") " pod="openshift-console-operator/console-operator-58897d9998-l2q84" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.748352 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.748455 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.748557 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.748758 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/736c08e7-ec73-4045-808e-dc00a1cdf894-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.751498 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.752197 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.752356 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.752561 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/736c08e7-ec73-4045-808e-dc00a1cdf894-encryption-config\") pod 
\"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.752892 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-p4rlg"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.752992 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/736c08e7-ec73-4045-808e-dc00a1cdf894-etcd-client\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.753211 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.754030 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4rlg" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.762076 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c08e7-ec73-4045-808e-dc00a1cdf894-serving-cert\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.773463 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.773775 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.774040 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.774234 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.774574 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.776110 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.778231 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.778553 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 15:19:47 crc 
kubenswrapper[4871]: I0128 15:19:47.778926 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.779067 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.779237 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.779386 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.781971 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.782158 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.782289 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.782548 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.782672 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.782744 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.782825 4871 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.782869 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.782882 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.782935 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.782982 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.783040 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.783062 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.783101 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.783040 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.782879 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.789817 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 
15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.789988 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.792387 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.794849 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.795474 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.795933 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.796320 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.796960 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.798932 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.801395 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.810799 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9qxf4"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.812866 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8xpb5"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.821922 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8xpb5" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.826000 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.827106 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.830540 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-q48tf"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.834125 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.837762 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d2f7r"] Jan 28 15:19:47 crc 
kubenswrapper[4871]: I0128 15:19:47.840472 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.842295 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vxr9x"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.844793 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v4rcn"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.846506 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.847727 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-p4lhv"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849256 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g4hz\" (UniqueName: \"kubernetes.io/projected/2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce-kube-api-access-2g4hz\") pod \"console-operator-58897d9998-l2q84\" (UID: \"2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce\") " pod="openshift-console-operator/console-operator-58897d9998-l2q84" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849312 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpb9p\" (UniqueName: \"kubernetes.io/projected/48a7be4a-2d1b-4b46-a720-4068e3fad906-kube-api-access-gpb9p\") pod \"marketplace-operator-79b997595-ql229\" (UID: \"48a7be4a-2d1b-4b46-a720-4068e3fad906\") " pod="openshift-marketplace/marketplace-operator-79b997595-ql229" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849346 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/2828ed17-56e2-4da0-8e37-2b366d02fbae-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qnhwk\" (UID: \"2828ed17-56e2-4da0-8e37-2b366d02fbae\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qnhwk" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849372 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ad8c2aba-da78-44df-a60b-40de6f250df9-signing-cabundle\") pod \"service-ca-9c57cc56f-vxr9x\" (UID: \"ad8c2aba-da78-44df-a60b-40de6f250df9\") " pod="openshift-service-ca/service-ca-9c57cc56f-vxr9x" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849406 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dvph\" (UniqueName: \"kubernetes.io/projected/e6a80b6c-f05d-400b-adb2-8a1637983435-kube-api-access-4dvph\") pod \"etcd-operator-b45778765-c7255\" (UID: \"e6a80b6c-f05d-400b-adb2-8a1637983435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849430 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75f1faad-49ec-4a72-944b-8857e0752a8c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2lxj6\" (UID: \"75f1faad-49ec-4a72-944b-8857e0752a8c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2lxj6" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849453 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-serving-cert\") pod \"controller-manager-879f6c89f-bh7lg\" (UID: \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 
15:19:47.849480 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48a7be4a-2d1b-4b46-a720-4068e3fad906-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ql229\" (UID: \"48a7be4a-2d1b-4b46-a720-4068e3fad906\") " pod="openshift-marketplace/marketplace-operator-79b997595-ql229" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849506 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-config\") pod \"route-controller-manager-6576b87f9c-q676l\" (UID: \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849533 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dc34e29b-b869-467b-85a6-aac06d35be0f-metrics-tls\") pod \"dns-operator-744455d44c-q48tf\" (UID: \"dc34e29b-b869-467b-85a6-aac06d35be0f\") " pod="openshift-dns-operator/dns-operator-744455d44c-q48tf" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849555 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6a80b6c-f05d-400b-adb2-8a1637983435-serving-cert\") pod \"etcd-operator-b45778765-c7255\" (UID: \"e6a80b6c-f05d-400b-adb2-8a1637983435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849578 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce-config\") pod \"console-operator-58897d9998-l2q84\" (UID: \"2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce\") " 
pod="openshift-console-operator/console-operator-58897d9998-l2q84" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849655 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-serving-cert\") pod \"route-controller-manager-6576b87f9c-q676l\" (UID: \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849679 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a80b6c-f05d-400b-adb2-8a1637983435-config\") pod \"etcd-operator-b45778765-c7255\" (UID: \"e6a80b6c-f05d-400b-adb2-8a1637983435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849701 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce-trusted-ca\") pod \"console-operator-58897d9998-l2q84\" (UID: \"2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce\") " pod="openshift-console-operator/console-operator-58897d9998-l2q84" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849727 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e6a80b6c-f05d-400b-adb2-8a1637983435-etcd-service-ca\") pod \"etcd-operator-b45778765-c7255\" (UID: \"e6a80b6c-f05d-400b-adb2-8a1637983435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849748 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ad8c2aba-da78-44df-a60b-40de6f250df9-signing-key\") pod \"service-ca-9c57cc56f-vxr9x\" (UID: 
\"ad8c2aba-da78-44df-a60b-40de6f250df9\") " pod="openshift-service-ca/service-ca-9c57cc56f-vxr9x" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849783 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp7nf\" (UniqueName: \"kubernetes.io/projected/3e91d969-6cd9-40bd-afc2-7861502f0073-kube-api-access-xp7nf\") pod \"machine-config-controller-84d6567774-5ffkk\" (UID: \"3e91d969-6cd9-40bd-afc2-7861502f0073\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5ffkk" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849811 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48a7be4a-2d1b-4b46-a720-4068e3fad906-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ql229\" (UID: \"48a7be4a-2d1b-4b46-a720-4068e3fad906\") " pod="openshift-marketplace/marketplace-operator-79b997595-ql229" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849834 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-client-ca\") pod \"route-controller-manager-6576b87f9c-q676l\" (UID: \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849856 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6595\" (UniqueName: \"kubernetes.io/projected/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-kube-api-access-w6595\") pod \"route-controller-manager-6576b87f9c-q676l\" (UID: \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849879 4871 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2828ed17-56e2-4da0-8e37-2b366d02fbae-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qnhwk\" (UID: \"2828ed17-56e2-4da0-8e37-2b366d02fbae\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qnhwk" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849899 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e91d969-6cd9-40bd-afc2-7861502f0073-proxy-tls\") pod \"machine-config-controller-84d6567774-5ffkk\" (UID: \"3e91d969-6cd9-40bd-afc2-7861502f0073\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5ffkk" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849922 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e6a80b6c-f05d-400b-adb2-8a1637983435-etcd-ca\") pod \"etcd-operator-b45778765-c7255\" (UID: \"e6a80b6c-f05d-400b-adb2-8a1637983435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849944 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hbfw\" (UniqueName: \"kubernetes.io/projected/dc34e29b-b869-467b-85a6-aac06d35be0f-kube-api-access-6hbfw\") pod \"dns-operator-744455d44c-q48tf\" (UID: \"dc34e29b-b869-467b-85a6-aac06d35be0f\") " pod="openshift-dns-operator/dns-operator-744455d44c-q48tf" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849969 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3e91d969-6cd9-40bd-afc2-7861502f0073-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5ffkk\" (UID: \"3e91d969-6cd9-40bd-afc2-7861502f0073\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5ffkk" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.849992 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4cq8\" (UniqueName: \"kubernetes.io/projected/ad8c2aba-da78-44df-a60b-40de6f250df9-kube-api-access-t4cq8\") pod \"service-ca-9c57cc56f-vxr9x\" (UID: \"ad8c2aba-da78-44df-a60b-40de6f250df9\") " pod="openshift-service-ca/service-ca-9c57cc56f-vxr9x" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.850031 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bh7lg\" (UID: \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.850060 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs2wf\" (UniqueName: \"kubernetes.io/projected/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-kube-api-access-fs2wf\") pod \"controller-manager-879f6c89f-bh7lg\" (UID: \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.850093 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e6a80b6c-f05d-400b-adb2-8a1637983435-etcd-client\") pod \"etcd-operator-b45778765-c7255\" (UID: \"e6a80b6c-f05d-400b-adb2-8a1637983435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.850120 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75f1faad-49ec-4a72-944b-8857e0752a8c-config\") pod 
\"kube-apiserver-operator-766d6c64bb-2lxj6\" (UID: \"75f1faad-49ec-4a72-944b-8857e0752a8c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2lxj6" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.850164 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-config\") pod \"controller-manager-879f6c89f-bh7lg\" (UID: \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.850186 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75f1faad-49ec-4a72-944b-8857e0752a8c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2lxj6\" (UID: \"75f1faad-49ec-4a72-944b-8857e0752a8c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2lxj6" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.850209 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce-serving-cert\") pod \"console-operator-58897d9998-l2q84\" (UID: \"2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce\") " pod="openshift-console-operator/console-operator-58897d9998-l2q84" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.850231 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-client-ca\") pod \"controller-manager-879f6c89f-bh7lg\" (UID: \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.850269 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/2828ed17-56e2-4da0-8e37-2b366d02fbae-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qnhwk\" (UID: \"2828ed17-56e2-4da0-8e37-2b366d02fbae\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qnhwk" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.851277 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-config\") pod \"route-controller-manager-6576b87f9c-q676l\" (UID: \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.852002 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75f1faad-49ec-4a72-944b-8857e0752a8c-config\") pod \"kube-apiserver-operator-766d6c64bb-2lxj6\" (UID: \"75f1faad-49ec-4a72-944b-8857e0752a8c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2lxj6" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.852262 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e6a80b6c-f05d-400b-adb2-8a1637983435-etcd-service-ca\") pod \"etcd-operator-b45778765-c7255\" (UID: \"e6a80b6c-f05d-400b-adb2-8a1637983435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.852365 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.852831 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-config\") pod \"controller-manager-879f6c89f-bh7lg\" (UID: 
\"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.852367 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e6a80b6c-f05d-400b-adb2-8a1637983435-etcd-ca\") pod \"etcd-operator-b45778765-c7255\" (UID: \"e6a80b6c-f05d-400b-adb2-8a1637983435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.853214 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-qxtnb"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.853370 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3e91d969-6cd9-40bd-afc2-7861502f0073-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5ffkk\" (UID: \"3e91d969-6cd9-40bd-afc2-7861502f0073\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5ffkk" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.854004 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-qxtnb" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.854597 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dc34e29b-b869-467b-85a6-aac06d35be0f-metrics-tls\") pod \"dns-operator-744455d44c-q48tf\" (UID: \"dc34e29b-b869-467b-85a6-aac06d35be0f\") " pod="openshift-dns-operator/dns-operator-744455d44c-q48tf" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.854927 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e91d969-6cd9-40bd-afc2-7861502f0073-proxy-tls\") pod \"machine-config-controller-84d6567774-5ffkk\" (UID: \"3e91d969-6cd9-40bd-afc2-7861502f0073\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5ffkk" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.855313 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6a80b6c-f05d-400b-adb2-8a1637983435-serving-cert\") pod \"etcd-operator-b45778765-c7255\" (UID: \"e6a80b6c-f05d-400b-adb2-8a1637983435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.855426 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e6a80b6c-f05d-400b-adb2-8a1637983435-etcd-client\") pod \"etcd-operator-b45778765-c7255\" (UID: \"e6a80b6c-f05d-400b-adb2-8a1637983435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.855628 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-client-ca\") pod \"controller-manager-879f6c89f-bh7lg\" (UID: \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.855834 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75f1faad-49ec-4a72-944b-8857e0752a8c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2lxj6\" (UID: \"75f1faad-49ec-4a72-944b-8857e0752a8c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2lxj6" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.856120 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a80b6c-f05d-400b-adb2-8a1637983435-config\") pod \"etcd-operator-b45778765-c7255\" (UID: \"e6a80b6c-f05d-400b-adb2-8a1637983435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.856216 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-client-ca\") pod \"route-controller-manager-6576b87f9c-q676l\" (UID: \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.856325 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tglzn"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.856518 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce-config\") pod \"console-operator-58897d9998-l2q84\" (UID: \"2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce\") " pod="openshift-console-operator/console-operator-58897d9998-l2q84" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.856543 4871 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce-trusted-ca\") pod \"console-operator-58897d9998-l2q84\" (UID: \"2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce\") " pod="openshift-console-operator/console-operator-58897d9998-l2q84" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.857080 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bh7lg\" (UID: \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.857292 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2828ed17-56e2-4da0-8e37-2b366d02fbae-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qnhwk\" (UID: \"2828ed17-56e2-4da0-8e37-2b366d02fbae\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qnhwk" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.857444 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-serving-cert\") pod \"controller-manager-879f6c89f-bh7lg\" (UID: \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.857729 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mq867"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.859074 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vprhz"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.859880 4871 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ad8c2aba-da78-44df-a60b-40de6f250df9-signing-key\") pod \"service-ca-9c57cc56f-vxr9x\" (UID: \"ad8c2aba-da78-44df-a60b-40de6f250df9\") " pod="openshift-service-ca/service-ca-9c57cc56f-vxr9x" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.860263 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-serving-cert\") pod \"route-controller-manager-6576b87f9c-q676l\" (UID: \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.860319 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5ffkk"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.861338 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2828ed17-56e2-4da0-8e37-2b366d02fbae-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qnhwk\" (UID: \"2828ed17-56e2-4da0-8e37-2b366d02fbae\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qnhwk" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.861400 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qnhwk"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.861537 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce-serving-cert\") pod \"console-operator-58897d9998-l2q84\" (UID: \"2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce\") " pod="openshift-console-operator/console-operator-58897d9998-l2q84" Jan 28 15:19:47 crc 
kubenswrapper[4871]: I0128 15:19:47.862694 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xpk8s"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.863705 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lqzgh"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.866847 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jn4dg"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.868839 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dzwqq"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.871752 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zdfpg"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.872250 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.875600 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.879497 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bh7lg"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.880706 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ad8c2aba-da78-44df-a60b-40de6f250df9-signing-cabundle\") pod \"service-ca-9c57cc56f-vxr9x\" (UID: \"ad8c2aba-da78-44df-a60b-40de6f250df9\") " pod="openshift-service-ca/service-ca-9c57cc56f-vxr9x" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.881988 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication-operator/authentication-operator-69f744f599-zthwv"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.883223 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c7255"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.884531 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ql229"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.885980 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fvzpr"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.889211 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-j85fr"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.889250 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqczj"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.889266 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l2q84"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.889365 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.890265 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.891331 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2lxj6"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.891531 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.892284 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.893335 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.894325 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-p4rlg"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.895315 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jhdgw"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.896264 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.897236 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qxtnb"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.898206 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-cc8lr"] Jan 28 15:19:47 crc 
kubenswrapper[4871]: I0128 15:19:47.898914 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cc8lr" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.899221 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fvzpr"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.900207 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cc8lr"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.901199 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8xpb5"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.902205 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.903237 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-pnb92"] Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.903910 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pnb92" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.912429 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.932622 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.958155 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.961027 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48a7be4a-2d1b-4b46-a720-4068e3fad906-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ql229\" (UID: \"48a7be4a-2d1b-4b46-a720-4068e3fad906\") " pod="openshift-marketplace/marketplace-operator-79b997595-ql229" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.982081 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 15:19:47 crc kubenswrapper[4871]: I0128 15:19:47.992548 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.000040 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48a7be4a-2d1b-4b46-a720-4068e3fad906-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ql229\" (UID: \"48a7be4a-2d1b-4b46-a720-4068e3fad906\") " pod="openshift-marketplace/marketplace-operator-79b997595-ql229" Jan 28 15:19:48 crc 
kubenswrapper[4871]: I0128 15:19:48.012710 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.032494 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.052833 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.072999 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.093059 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.112901 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.132614 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.153325 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.173712 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.193811 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 15:19:48 crc 
kubenswrapper[4871]: I0128 15:19:48.213042 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.233290 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.253365 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.274148 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.294012 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.313065 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.332324 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.352237 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.373286 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.394409 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.433337 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 15:19:48 crc 
kubenswrapper[4871]: I0128 15:19:48.453611 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.474727 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.492990 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.514323 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.532578 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.564094 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.572248 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.593444 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.612153 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.634497 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.652659 
4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.673396 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.693186 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.712497 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.733134 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.751382 4871 request.go:700] Waited for 1.00598302s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.754149 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.772522 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.792188 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.812906 4871 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.833004 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.852351 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.872383 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.901690 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.912838 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.935163 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.953834 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.974457 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 15:19:48 crc kubenswrapper[4871]: I0128 15:19:48.992447 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.012732 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 
15:19:49.047540 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pqrb\" (UniqueName: \"kubernetes.io/projected/95abe15d-d903-4744-87a3-61e27e8bb7e8-kube-api-access-8pqrb\") pod \"migrator-59844c95c7-9qxf4\" (UID: \"95abe15d-d903-4744-87a3-61e27e8bb7e8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9qxf4" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.057152 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.073002 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.093424 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.112658 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.134038 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.153534 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.188748 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.210201 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4p8f\" (UniqueName: \"kubernetes.io/projected/736c08e7-ec73-4045-808e-dc00a1cdf894-kube-api-access-w4p8f\") pod \"apiserver-7bbb656c7d-jtc4s\" (UID: \"736c08e7-ec73-4045-808e-dc00a1cdf894\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:49 
crc kubenswrapper[4871]: I0128 15:19:49.233069 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.252359 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.273257 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.293621 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.312712 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.333497 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.344295 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9qxf4" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.354243 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.358709 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.375189 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.425412 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpb9p\" (UniqueName: \"kubernetes.io/projected/48a7be4a-2d1b-4b46-a720-4068e3fad906-kube-api-access-gpb9p\") pod \"marketplace-operator-79b997595-ql229\" (UID: \"48a7be4a-2d1b-4b46-a720-4068e3fad906\") " pod="openshift-marketplace/marketplace-operator-79b997595-ql229" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.440504 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g4hz\" (UniqueName: \"kubernetes.io/projected/2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce-kube-api-access-2g4hz\") pod \"console-operator-58897d9998-l2q84\" (UID: \"2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce\") " pod="openshift-console-operator/console-operator-58897d9998-l2q84" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.453389 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dvph\" (UniqueName: \"kubernetes.io/projected/e6a80b6c-f05d-400b-adb2-8a1637983435-kube-api-access-4dvph\") pod \"etcd-operator-b45778765-c7255\" (UID: \"e6a80b6c-f05d-400b-adb2-8a1637983435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.455159 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ql229" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.478199 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2828ed17-56e2-4da0-8e37-2b366d02fbae-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qnhwk\" (UID: \"2828ed17-56e2-4da0-8e37-2b366d02fbae\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qnhwk" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.496056 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75f1faad-49ec-4a72-944b-8857e0752a8c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2lxj6\" (UID: \"75f1faad-49ec-4a72-944b-8857e0752a8c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2lxj6" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.508768 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs2wf\" (UniqueName: \"kubernetes.io/projected/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-kube-api-access-fs2wf\") pod \"controller-manager-879f6c89f-bh7lg\" (UID: \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.546131 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hbfw\" (UniqueName: \"kubernetes.io/projected/dc34e29b-b869-467b-85a6-aac06d35be0f-kube-api-access-6hbfw\") pod \"dns-operator-744455d44c-q48tf\" (UID: \"dc34e29b-b869-467b-85a6-aac06d35be0f\") " pod="openshift-dns-operator/dns-operator-744455d44c-q48tf" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.550578 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4cq8\" (UniqueName: 
\"kubernetes.io/projected/ad8c2aba-da78-44df-a60b-40de6f250df9-kube-api-access-t4cq8\") pod \"service-ca-9c57cc56f-vxr9x\" (UID: \"ad8c2aba-da78-44df-a60b-40de6f250df9\") " pod="openshift-service-ca/service-ca-9c57cc56f-vxr9x" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.552324 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.573072 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.593339 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.609810 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qnhwk" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.626749 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6595\" (UniqueName: \"kubernetes.io/projected/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-kube-api-access-w6595\") pod \"route-controller-manager-6576b87f9c-q676l\" (UID: \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.630017 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2lxj6" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.643334 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.647101 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp7nf\" (UniqueName: \"kubernetes.io/projected/3e91d969-6cd9-40bd-afc2-7861502f0073-kube-api-access-xp7nf\") pod \"machine-config-controller-84d6567774-5ffkk\" (UID: \"3e91d969-6cd9-40bd-afc2-7861502f0073\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5ffkk" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.655444 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s"] Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.655623 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 28 15:19:49 crc kubenswrapper[4871]: W0128 15:19:49.665981 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod736c08e7_ec73_4045_808e_dc00a1cdf894.slice/crio-819fbdfca2fafbe63a04c3d6ecdd734cd031cdcfaca441116ed9f919aca59c3b WatchSource:0}: Error finding container 819fbdfca2fafbe63a04c3d6ecdd734cd031cdcfaca441116ed9f919aca59c3b: Status 404 returned error can't find the container with id 819fbdfca2fafbe63a04c3d6ecdd734cd031cdcfaca441116ed9f919aca59c3b Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.672722 4871 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.685538 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-q48tf" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.691856 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ql229"] Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.693127 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.706217 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.714830 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.725319 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5ffkk" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.730740 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-l2q84" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.732394 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.748165 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vxr9x" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.752406 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.763992 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" event={"ID":"736c08e7-ec73-4045-808e-dc00a1cdf894","Type":"ContainerStarted","Data":"819fbdfca2fafbe63a04c3d6ecdd734cd031cdcfaca441116ed9f919aca59c3b"} Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.764547 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ql229" event={"ID":"48a7be4a-2d1b-4b46-a720-4068e3fad906","Type":"ContainerStarted","Data":"4a27aa4ea18e42781af15fd93078a981f24f22e9a64efaff0b9142eb1c38a363"} Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.770742 4871 request.go:700] Waited for 1.871568087s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.772384 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.790998 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9qxf4"] Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.793175 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.813484 4871 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.817010 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qnhwk"] Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.834065 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.853848 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.873248 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.873655 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-etcd-serving-ca\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.873678 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb8kf\" (UniqueName: \"kubernetes.io/projected/4acd9e6a-53ba-41a6-9b34-0c93dd921150-kube-api-access-mb8kf\") pod \"machine-config-operator-74547568cd-xv6dw\" (UID: \"4acd9e6a-53ba-41a6-9b34-0c93dd921150\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.873703 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc917f74-9ee2-4f96-baa1-9bc802c0d448-images\") pod \"machine-api-operator-5694c8668f-p4lhv\" (UID: \"dc917f74-9ee2-4f96-baa1-9bc802c0d448\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4lhv" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.873722 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/514e10d9-70a0-4a80-bd56-349522fad444-srv-cert\") pod \"catalog-operator-68c6474976-mzvt5\" (UID: \"514e10d9-70a0-4a80-bd56-349522fad444\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.873748 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.873766 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v8dl\" (UniqueName: \"kubernetes.io/projected/514e10d9-70a0-4a80-bd56-349522fad444-kube-api-access-6v8dl\") pod \"catalog-operator-68c6474976-mzvt5\" (UID: \"514e10d9-70a0-4a80-bd56-349522fad444\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.873782 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee25a7d4-5043-48d7-91d1-68f2af96109a-trusted-ca\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.873800 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jnjh\" (UniqueName: \"kubernetes.io/projected/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-kube-api-access-5jnjh\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.873818 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.873839 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/10252089-5c17-442d-bf25-f6aa45799274-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tglzn\" (UID: \"10252089-5c17-442d-bf25-f6aa45799274\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tglzn" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.873855 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-node-pullsecrets\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " 
pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.873876 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91bad959-dee9-48f7-90c6-7d7462b8cf9f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-d2f7r\" (UID: \"91bad959-dee9-48f7-90c6-7d7462b8cf9f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d2f7r" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.873921 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ee25a7d4-5043-48d7-91d1-68f2af96109a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.873938 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w44gq\" (UniqueName: \"kubernetes.io/projected/ee25a7d4-5043-48d7-91d1-68f2af96109a-kube-api-access-w44gq\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.873956 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmsc2\" (UniqueName: \"kubernetes.io/projected/7128a8bf-85ad-45ed-bb6f-9efc037aca2b-kube-api-access-wmsc2\") pod \"openshift-config-operator-7777fb866f-vkb2v\" (UID: \"7128a8bf-85ad-45ed-bb6f-9efc037aca2b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.873973 4871 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4acd9e6a-53ba-41a6-9b34-0c93dd921150-proxy-tls\") pod \"machine-config-operator-74547568cd-xv6dw\" (UID: \"4acd9e6a-53ba-41a6-9b34-0c93dd921150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.873989 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d48b65d8-de38-4d1f-9162-3f16e3b8401b-stats-auth\") pod \"router-default-5444994796-9q8n5\" (UID: \"d48b65d8-de38-4d1f-9162-3f16e3b8401b\") " pod="openshift-ingress/router-default-5444994796-9q8n5" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.874017 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ee25a7d4-5043-48d7-91d1-68f2af96109a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.874033 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d48b65d8-de38-4d1f-9162-3f16e3b8401b-metrics-certs\") pod \"router-default-5444994796-9q8n5\" (UID: \"d48b65d8-de38-4d1f-9162-3f16e3b8401b\") " pod="openshift-ingress/router-default-5444994796-9q8n5" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.874054 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlpfh\" (UniqueName: \"kubernetes.io/projected/28dc3b19-c5e4-4de6-889a-043b95a5f0f2-kube-api-access-jlpfh\") pod \"control-plane-machine-set-operator-78cbb6b69f-lqzgh\" (UID: 
\"28dc3b19-c5e4-4de6-889a-043b95a5f0f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lqzgh" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.874071 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-etcd-client\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.874104 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.874123 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7128a8bf-85ad-45ed-bb6f-9efc037aca2b-serving-cert\") pod \"openshift-config-operator-7777fb866f-vkb2v\" (UID: \"7128a8bf-85ad-45ed-bb6f-9efc037aca2b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.874140 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc917f74-9ee2-4f96-baa1-9bc802c0d448-config\") pod \"machine-api-operator-5694c8668f-p4lhv\" (UID: \"dc917f74-9ee2-4f96-baa1-9bc802c0d448\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4lhv" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.874753 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-cth7n\" (UniqueName: \"kubernetes.io/projected/dc917f74-9ee2-4f96-baa1-9bc802c0d448-kube-api-access-cth7n\") pod \"machine-api-operator-5694c8668f-p4lhv\" (UID: \"dc917f74-9ee2-4f96-baa1-9bc802c0d448\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4lhv" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.874781 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7128a8bf-85ad-45ed-bb6f-9efc037aca2b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vkb2v\" (UID: \"7128a8bf-85ad-45ed-bb6f-9efc037aca2b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.874804 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-config\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.874824 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.874841 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-audit-dir\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.874857 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d48b65d8-de38-4d1f-9162-3f16e3b8401b-service-ca-bundle\") pod \"router-default-5444994796-9q8n5\" (UID: \"d48b65d8-de38-4d1f-9162-3f16e3b8401b\") " pod="openshift-ingress/router-default-5444994796-9q8n5" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.874880 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e058d636-1ce7-480a-a798-378083e7edf9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gqczj\" (UID: \"e058d636-1ce7-480a-a798-378083e7edf9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqczj" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.875690 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/28dc3b19-c5e4-4de6-889a-043b95a5f0f2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-lqzgh\" (UID: \"28dc3b19-c5e4-4de6-889a-043b95a5f0f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lqzgh" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.875723 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca237fb9-41ce-45a3-bc60-7439787888da-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xpk8s\" (UID: \"ca237fb9-41ce-45a3-bc60-7439787888da\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xpk8s" Jan 28 15:19:49 crc kubenswrapper[4871]: 
I0128 15:19:49.875738 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-serving-cert\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.875753 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdxl9\" (UniqueName: \"kubernetes.io/projected/ca237fb9-41ce-45a3-bc60-7439787888da-kube-api-access-mdxl9\") pod \"openshift-controller-manager-operator-756b6f6bc6-xpk8s\" (UID: \"ca237fb9-41ce-45a3-bc60-7439787888da\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xpk8s" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.875772 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xswvb\" (UniqueName: \"kubernetes.io/projected/80a59a1a-e3b3-4e09-8be9-275fe4d95dff-kube-api-access-xswvb\") pod \"cluster-samples-operator-665b6dd947-v4rcn\" (UID: \"80a59a1a-e3b3-4e09-8be9-275fe4d95dff\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v4rcn" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.875789 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e058d636-1ce7-480a-a798-378083e7edf9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gqczj\" (UID: \"e058d636-1ce7-480a-a798-378083e7edf9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqczj" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.875804 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" 
(UniqueName: \"kubernetes.io/configmap/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-audit\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.875819 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-audit-policies\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.875835 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91bad959-dee9-48f7-90c6-7d7462b8cf9f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-d2f7r\" (UID: \"91bad959-dee9-48f7-90c6-7d7462b8cf9f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d2f7r" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.875870 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.875886 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.875901 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.875917 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.875942 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d48b65d8-de38-4d1f-9162-3f16e3b8401b-default-certificate\") pod \"router-default-5444994796-9q8n5\" (UID: \"d48b65d8-de38-4d1f-9162-3f16e3b8401b\") " pod="openshift-ingress/router-default-5444994796-9q8n5" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.875959 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4acd9e6a-53ba-41a6-9b34-0c93dd921150-images\") pod \"machine-config-operator-74547568cd-xv6dw\" (UID: \"4acd9e6a-53ba-41a6-9b34-0c93dd921150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.876837 4871 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d9cf\" (UniqueName: \"kubernetes.io/projected/10252089-5c17-442d-bf25-f6aa45799274-kube-api-access-7d9cf\") pod \"multus-admission-controller-857f4d67dd-tglzn\" (UID: \"10252089-5c17-442d-bf25-f6aa45799274\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tglzn" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.876865 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4acd9e6a-53ba-41a6-9b34-0c93dd921150-auth-proxy-config\") pod \"machine-config-operator-74547568cd-xv6dw\" (UID: \"4acd9e6a-53ba-41a6-9b34-0c93dd921150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.876882 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/80a59a1a-e3b3-4e09-8be9-275fe4d95dff-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-v4rcn\" (UID: \"80a59a1a-e3b3-4e09-8be9-275fe4d95dff\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v4rcn" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.876902 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca237fb9-41ce-45a3-bc60-7439787888da-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xpk8s\" (UID: \"ca237fb9-41ce-45a3-bc60-7439787888da\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xpk8s" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.876932 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgbkp\" (UniqueName: 
\"kubernetes.io/projected/d48b65d8-de38-4d1f-9162-3f16e3b8401b-kube-api-access-pgbkp\") pod \"router-default-5444994796-9q8n5\" (UID: \"d48b65d8-de38-4d1f-9162-3f16e3b8401b\") " pod="openshift-ingress/router-default-5444994796-9q8n5" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.876949 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.876980 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.876999 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc917f74-9ee2-4f96-baa1-9bc802c0d448-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-p4lhv\" (UID: \"dc917f74-9ee2-4f96-baa1-9bc802c0d448\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4lhv" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.877031 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ee25a7d4-5043-48d7-91d1-68f2af96109a-registry-tls\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 
15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.877048 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.877066 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnwvl\" (UniqueName: \"kubernetes.io/projected/91bad959-dee9-48f7-90c6-7d7462b8cf9f-kube-api-access-nnwvl\") pod \"openshift-apiserver-operator-796bbdcf4f-d2f7r\" (UID: \"91bad959-dee9-48f7-90c6-7d7462b8cf9f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d2f7r" Jan 28 15:19:49 crc kubenswrapper[4871]: E0128 15:19:49.877509 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:50.377493467 +0000 UTC m=+142.273331789 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.877797 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e058d636-1ce7-480a-a798-378083e7edf9-config\") pod \"kube-controller-manager-operator-78b949d7b-gqczj\" (UID: \"e058d636-1ce7-480a-a798-378083e7edf9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqczj" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.877818 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ee25a7d4-5043-48d7-91d1-68f2af96109a-registry-certificates\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.877833 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-audit-dir\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.877848 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/514e10d9-70a0-4a80-bd56-349522fad444-profile-collector-cert\") pod \"catalog-operator-68c6474976-mzvt5\" (UID: \"514e10d9-70a0-4a80-bd56-349522fad444\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.877868 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee25a7d4-5043-48d7-91d1-68f2af96109a-bound-sa-token\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.877884 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-encryption-config\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.877898 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24ltz\" (UniqueName: \"kubernetes.io/projected/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-kube-api-access-24ltz\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.877917 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-image-import-ca\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 
15:19:49.877934 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.885768 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2lxj6"] Jan 28 15:19:49 crc kubenswrapper[4871]: W0128 15:19:49.917567 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75f1faad_49ec_4a72_944b_8857e0752a8c.slice/crio-4a2155ec1657b2c6b411cbc98defce3e77e22a6480e14a83fdb4f51755370fe4 WatchSource:0}: Error finding container 4a2155ec1657b2c6b411cbc98defce3e77e22a6480e14a83fdb4f51755370fe4: Status 404 returned error can't find the container with id 4a2155ec1657b2c6b411cbc98defce3e77e22a6480e14a83fdb4f51755370fe4 Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.959235 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c7255"] Jan 28 15:19:49 crc kubenswrapper[4871]: W0128 15:19:49.970959 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6a80b6c_f05d_400b_adb2_8a1637983435.slice/crio-870afee44db3338b91331c6919a121606649038a90e083236190590dcaebd80e WatchSource:0}: Error finding container 870afee44db3338b91331c6919a121606649038a90e083236190590dcaebd80e: Status 404 returned error can't find the container with id 870afee44db3338b91331c6919a121606649038a90e083236190590dcaebd80e Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.978681 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.978908 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.978936 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/10252089-5c17-442d-bf25-f6aa45799274-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tglzn\" (UID: \"10252089-5c17-442d-bf25-f6aa45799274\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tglzn" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.978956 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-node-pullsecrets\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.978983 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w44gq\" (UniqueName: \"kubernetes.io/projected/ee25a7d4-5043-48d7-91d1-68f2af96109a-kube-api-access-w44gq\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:49 crc kubenswrapper[4871]: 
I0128 15:19:49.979004 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmsc2\" (UniqueName: \"kubernetes.io/projected/7128a8bf-85ad-45ed-bb6f-9efc037aca2b-kube-api-access-wmsc2\") pod \"openshift-config-operator-7777fb866f-vkb2v\" (UID: \"7128a8bf-85ad-45ed-bb6f-9efc037aca2b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979021 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4acd9e6a-53ba-41a6-9b34-0c93dd921150-proxy-tls\") pod \"machine-config-operator-74547568cd-xv6dw\" (UID: \"4acd9e6a-53ba-41a6-9b34-0c93dd921150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw" Jan 28 15:19:49 crc kubenswrapper[4871]: E0128 15:19:49.979068 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:50.479029871 +0000 UTC m=+142.374868193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979131 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d48b65d8-de38-4d1f-9162-3f16e3b8401b-stats-auth\") pod \"router-default-5444994796-9q8n5\" (UID: \"d48b65d8-de38-4d1f-9162-3f16e3b8401b\") " pod="openshift-ingress/router-default-5444994796-9q8n5" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979187 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7d69d2d-a07f-4fd5-a9d0-964e22792d42-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8xpb5\" (UID: \"e7d69d2d-a07f-4fd5-a9d0-964e22792d42\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8xpb5" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979221 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-service-ca\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979257 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d48b65d8-de38-4d1f-9162-3f16e3b8401b-metrics-certs\") pod 
\"router-default-5444994796-9q8n5\" (UID: \"d48b65d8-de38-4d1f-9162-3f16e3b8401b\") " pod="openshift-ingress/router-default-5444994796-9q8n5" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979320 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2-socket-dir\") pod \"csi-hostpathplugin-fvzpr\" (UID: \"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2\") " pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979379 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/09d59f6a-1c35-407f-b590-ad87ba92da70-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2j6j6\" (UID: \"09d59f6a-1c35-407f-b590-ad87ba92da70\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979413 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlpfh\" (UniqueName: \"kubernetes.io/projected/28dc3b19-c5e4-4de6-889a-043b95a5f0f2-kube-api-access-jlpfh\") pod \"control-plane-machine-set-operator-78cbb6b69f-lqzgh\" (UID: \"28dc3b19-c5e4-4de6-889a-043b95a5f0f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lqzgh" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979438 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979467 4871 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7128a8bf-85ad-45ed-bb6f-9efc037aca2b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vkb2v\" (UID: \"7128a8bf-85ad-45ed-bb6f-9efc037aca2b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979494 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc917f74-9ee2-4f96-baa1-9bc802c0d448-config\") pod \"machine-api-operator-5694c8668f-p4lhv\" (UID: \"dc917f74-9ee2-4f96-baa1-9bc802c0d448\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4lhv" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979505 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-node-pullsecrets\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979517 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-etcd-client\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979576 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-config\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 
15:19:49.979616 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12fb6d93-ae53-42df-933b-7fe134353202-serving-cert\") pod \"service-ca-operator-777779d784-p4rlg\" (UID: \"12fb6d93-ae53-42df-933b-7fe134353202\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4rlg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979648 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-audit-dir\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979669 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d48b65d8-de38-4d1f-9162-3f16e3b8401b-service-ca-bundle\") pod \"router-default-5444994796-9q8n5\" (UID: \"d48b65d8-de38-4d1f-9162-3f16e3b8401b\") " pod="openshift-ingress/router-default-5444994796-9q8n5" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979724 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-console-config\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979748 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/28dc3b19-c5e4-4de6-889a-043b95a5f0f2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-lqzgh\" (UID: 
\"28dc3b19-c5e4-4de6-889a-043b95a5f0f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lqzgh" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979772 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca237fb9-41ce-45a3-bc60-7439787888da-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xpk8s\" (UID: \"ca237fb9-41ce-45a3-bc60-7439787888da\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xpk8s" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979811 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-serving-cert\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979843 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c77ad76-1c21-4ce4-ad9d-88f90aa9b133-metrics-tls\") pod \"ingress-operator-5b745b69d9-mq867\" (UID: \"6c77ad76-1c21-4ce4-ad9d-88f90aa9b133\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mq867" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979882 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e82eb460-ef08-480a-93a3-7ed6a33ea0ac-tmpfs\") pod \"packageserver-d55dfcdfc-6cz8j\" (UID: \"e82eb460-ef08-480a-93a3-7ed6a33ea0ac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979916 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-audit-policies\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979943 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5e4044a-73ca-49f4-bd2a-835d862ad993-metrics-tls\") pod \"dns-default-qxtnb\" (UID: \"d5e4044a-73ca-49f4-bd2a-835d862ad993\") " pod="openshift-dns/dns-default-qxtnb" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.979986 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980008 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4acd9e6a-53ba-41a6-9b34-0c93dd921150-images\") pod \"machine-config-operator-74547568cd-xv6dw\" (UID: \"4acd9e6a-53ba-41a6-9b34-0c93dd921150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980045 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980067 4871 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4acd9e6a-53ba-41a6-9b34-0c93dd921150-auth-proxy-config\") pod \"machine-config-operator-74547568cd-xv6dw\" (UID: \"4acd9e6a-53ba-41a6-9b34-0c93dd921150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980088 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/80a59a1a-e3b3-4e09-8be9-275fe4d95dff-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-v4rcn\" (UID: \"80a59a1a-e3b3-4e09-8be9-275fe4d95dff\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v4rcn" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980108 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2571452b-5b45-43d1-bd39-35ef29c4fe80-console-oauth-config\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980155 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgbkp\" (UniqueName: \"kubernetes.io/projected/d48b65d8-de38-4d1f-9162-3f16e3b8401b-kube-api-access-pgbkp\") pod \"router-default-5444994796-9q8n5\" (UID: \"d48b65d8-de38-4d1f-9162-3f16e3b8401b\") " pod="openshift-ingress/router-default-5444994796-9q8n5" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980179 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9780dff9-f003-484a-af82-94fd7cd97a32-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zthwv\" (UID: 
\"9780dff9-f003-484a-af82-94fd7cd97a32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980214 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc917f74-9ee2-4f96-baa1-9bc802c0d448-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-p4lhv\" (UID: \"dc917f74-9ee2-4f96-baa1-9bc802c0d448\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4lhv" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980237 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm7pl\" (UniqueName: \"kubernetes.io/projected/af48622d-dc2b-4a71-8d73-c16573492222-kube-api-access-zm7pl\") pod \"olm-operator-6b444d44fb-wgnps\" (UID: \"af48622d-dc2b-4a71-8d73-c16573492222\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980268 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e058d636-1ce7-480a-a798-378083e7edf9-config\") pod \"kube-controller-manager-operator-78b949d7b-gqczj\" (UID: \"e058d636-1ce7-480a-a798-378083e7edf9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqczj" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980289 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2-plugins-dir\") pod \"csi-hostpathplugin-fvzpr\" (UID: \"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2\") " pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980325 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee25a7d4-5043-48d7-91d1-68f2af96109a-bound-sa-token\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980349 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/514e10d9-70a0-4a80-bd56-349522fad444-profile-collector-cert\") pod \"catalog-operator-68c6474976-mzvt5\" (UID: \"514e10d9-70a0-4a80-bd56-349522fad444\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980372 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c1ad2bd7-612e-4f52-a8f3-5338193ee404-machine-approver-tls\") pod \"machine-approver-56656f9798-4qvnc\" (UID: \"c1ad2bd7-612e-4f52-a8f3-5338193ee404\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4qvnc" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980392 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2-registration-dir\") pod \"csi-hostpathplugin-fvzpr\" (UID: \"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2\") " pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980413 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2-csi-data-dir\") pod \"csi-hostpathplugin-fvzpr\" (UID: \"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2\") " pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" Jan 28 
15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980458 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-encryption-config\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980479 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09d59f6a-1c35-407f-b590-ad87ba92da70-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2j6j6\" (UID: \"09d59f6a-1c35-407f-b590-ad87ba92da70\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980499 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c77ad76-1c21-4ce4-ad9d-88f90aa9b133-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mq867\" (UID: \"6c77ad76-1c21-4ce4-ad9d-88f90aa9b133\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mq867" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980521 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89h8h\" (UniqueName: \"kubernetes.io/projected/3dcb49be-1798-4698-9d3c-39bf78d992e6-kube-api-access-89h8h\") pod \"downloads-7954f5f757-zdfpg\" (UID: \"3dcb49be-1798-4698-9d3c-39bf78d992e6\") " pod="openshift-console/downloads-7954f5f757-zdfpg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980556 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-etcd-serving-ca\") pod \"apiserver-76f77b778f-jn4dg\" 
(UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980577 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980614 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc917f74-9ee2-4f96-baa1-9bc802c0d448-images\") pod \"machine-api-operator-5694c8668f-p4lhv\" (UID: \"dc917f74-9ee2-4f96-baa1-9bc802c0d448\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4lhv" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980651 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-trusted-ca-bundle\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980678 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-oauth-serving-cert\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980746 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzb9l\" (UniqueName: 
\"kubernetes.io/projected/03ea1439-2fa7-44a0-ba99-457655fde2a6-kube-api-access-lzb9l\") pod \"kube-storage-version-migrator-operator-b67b599dd-jhdgw\" (UID: \"03ea1439-2fa7-44a0-ba99-457655fde2a6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jhdgw" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980792 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f7fv\" (UniqueName: \"kubernetes.io/projected/9780dff9-f003-484a-af82-94fd7cd97a32-kube-api-access-9f7fv\") pod \"authentication-operator-69f744f599-zthwv\" (UID: \"9780dff9-f003-484a-af82-94fd7cd97a32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980810 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad449099-f3be-4711-8c75-a8fab2eabda3-config-volume\") pod \"collect-profiles-29493555-5cqh5\" (UID: \"ad449099-f3be-4711-8c75-a8fab2eabda3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980844 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c1ad2bd7-612e-4f52-a8f3-5338193ee404-auth-proxy-config\") pod \"machine-approver-56656f9798-4qvnc\" (UID: \"c1ad2bd7-612e-4f52-a8f3-5338193ee404\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4qvnc" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980864 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2571452b-5b45-43d1-bd39-35ef29c4fe80-console-serving-cert\") pod \"console-f9d7485db-j85fr\" (UID: 
\"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980886 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gctxv\" (UniqueName: \"kubernetes.io/projected/2571452b-5b45-43d1-bd39-35ef29c4fe80-kube-api-access-gctxv\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980914 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91bad959-dee9-48f7-90c6-7d7462b8cf9f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-d2f7r\" (UID: \"91bad959-dee9-48f7-90c6-7d7462b8cf9f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d2f7r" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980936 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ee25a7d4-5043-48d7-91d1-68f2af96109a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980955 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c77ad76-1c21-4ce4-ad9d-88f90aa9b133-trusted-ca\") pod \"ingress-operator-5b745b69d9-mq867\" (UID: \"6c77ad76-1c21-4ce4-ad9d-88f90aa9b133\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mq867" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.980979 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/ee25a7d4-5043-48d7-91d1-68f2af96109a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.982067 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e058d636-1ce7-480a-a798-378083e7edf9-config\") pod \"kube-controller-manager-operator-78b949d7b-gqczj\" (UID: \"e058d636-1ce7-480a-a798-378083e7edf9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqczj" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.982520 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d48b65d8-de38-4d1f-9162-3f16e3b8401b-service-ca-bundle\") pod \"router-default-5444994796-9q8n5\" (UID: \"d48b65d8-de38-4d1f-9162-3f16e3b8401b\") " pod="openshift-ingress/router-default-5444994796-9q8n5" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.983195 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc917f74-9ee2-4f96-baa1-9bc802c0d448-images\") pod \"machine-api-operator-5694c8668f-p4lhv\" (UID: \"dc917f74-9ee2-4f96-baa1-9bc802c0d448\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4lhv" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.985128 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-etcd-serving-ca\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.985964 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-audit-policies\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.987246 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-audit-dir\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.987266 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca237fb9-41ce-45a3-bc60-7439787888da-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xpk8s\" (UID: \"ca237fb9-41ce-45a3-bc60-7439787888da\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xpk8s" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.987710 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.987735 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7128a8bf-85ad-45ed-bb6f-9efc037aca2b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vkb2v\" (UID: \"7128a8bf-85ad-45ed-bb6f-9efc037aca2b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v" Jan 28 15:19:49 crc 
kubenswrapper[4871]: I0128 15:19:49.988462 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc917f74-9ee2-4f96-baa1-9bc802c0d448-config\") pod \"machine-api-operator-5694c8668f-p4lhv\" (UID: \"dc917f74-9ee2-4f96-baa1-9bc802c0d448\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4lhv" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.988822 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ee25a7d4-5043-48d7-91d1-68f2af96109a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.989248 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kwvq\" (UniqueName: \"kubernetes.io/projected/8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2-kube-api-access-4kwvq\") pod \"csi-hostpathplugin-fvzpr\" (UID: \"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2\") " pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.989468 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrhxv\" (UniqueName: \"kubernetes.io/projected/88db3410-ef84-4e3b-821d-ef64a22c74f0-kube-api-access-xrhxv\") pod \"machine-config-server-pnb92\" (UID: \"88db3410-ef84-4e3b-821d-ef64a22c74f0\") " pod="openshift-machine-config-operator/machine-config-server-pnb92" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.989500 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7128a8bf-85ad-45ed-bb6f-9efc037aca2b-serving-cert\") pod \"openshift-config-operator-7777fb866f-vkb2v\" (UID: 
\"7128a8bf-85ad-45ed-bb6f-9efc037aca2b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.989531 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4gmd\" (UniqueName: \"kubernetes.io/projected/09d59f6a-1c35-407f-b590-ad87ba92da70-kube-api-access-w4gmd\") pod \"cluster-image-registry-operator-dc59b4c8b-2j6j6\" (UID: \"09d59f6a-1c35-407f-b590-ad87ba92da70\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.989559 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cth7n\" (UniqueName: \"kubernetes.io/projected/dc917f74-9ee2-4f96-baa1-9bc802c0d448-kube-api-access-cth7n\") pod \"machine-api-operator-5694c8668f-p4lhv\" (UID: \"dc917f74-9ee2-4f96-baa1-9bc802c0d448\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4lhv" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.989804 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/09d59f6a-1c35-407f-b590-ad87ba92da70-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2j6j6\" (UID: \"09d59f6a-1c35-407f-b590-ad87ba92da70\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.989839 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 
15:19:49.989863 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e058d636-1ce7-480a-a798-378083e7edf9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gqczj\" (UID: \"e058d636-1ce7-480a-a798-378083e7edf9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqczj" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.989882 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12fb6d93-ae53-42df-933b-7fe134353202-config\") pod \"service-ca-operator-777779d784-p4rlg\" (UID: \"12fb6d93-ae53-42df-933b-7fe134353202\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4rlg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.989915 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9780dff9-f003-484a-af82-94fd7cd97a32-serving-cert\") pod \"authentication-operator-69f744f599-zthwv\" (UID: \"9780dff9-f003-484a-af82-94fd7cd97a32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.990039 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/10252089-5c17-442d-bf25-f6aa45799274-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tglzn\" (UID: \"10252089-5c17-442d-bf25-f6aa45799274\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tglzn" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.990849 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-config\") pod \"apiserver-76f77b778f-jn4dg\" (UID: 
\"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.992380 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9780dff9-f003-484a-af82-94fd7cd97a32-service-ca-bundle\") pod \"authentication-operator-69f744f599-zthwv\" (UID: \"9780dff9-f003-484a-af82-94fd7cd97a32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.992429 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdxl9\" (UniqueName: \"kubernetes.io/projected/ca237fb9-41ce-45a3-bc60-7439787888da-kube-api-access-mdxl9\") pod \"openshift-controller-manager-operator-756b6f6bc6-xpk8s\" (UID: \"ca237fb9-41ce-45a3-bc60-7439787888da\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xpk8s" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.992450 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xswvb\" (UniqueName: \"kubernetes.io/projected/80a59a1a-e3b3-4e09-8be9-275fe4d95dff-kube-api-access-xswvb\") pod \"cluster-samples-operator-665b6dd947-v4rcn\" (UID: \"80a59a1a-e3b3-4e09-8be9-275fe4d95dff\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v4rcn" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.992474 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e058d636-1ce7-480a-a798-378083e7edf9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gqczj\" (UID: \"e058d636-1ce7-480a-a798-378083e7edf9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqczj" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 
15:19:49.992501 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/af48622d-dc2b-4a71-8d73-c16573492222-srv-cert\") pod \"olm-operator-6b444d44fb-wgnps\" (UID: \"af48622d-dc2b-4a71-8d73-c16573492222\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.992525 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-audit\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.992575 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.992645 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91bad959-dee9-48f7-90c6-7d7462b8cf9f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-d2f7r\" (UID: \"91bad959-dee9-48f7-90c6-7d7462b8cf9f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d2f7r" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.992680 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2-mountpoint-dir\") pod \"csi-hostpathplugin-fvzpr\" (UID: \"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2\") " 
pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.992698 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.992716 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.992727 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc917f74-9ee2-4f96-baa1-9bc802c0d448-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-p4lhv\" (UID: \"dc917f74-9ee2-4f96-baa1-9bc802c0d448\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4lhv" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.992736 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h6cm\" (UniqueName: \"kubernetes.io/projected/d5e4044a-73ca-49f4-bd2a-835d862ad993-kube-api-access-7h6cm\") pod \"dns-default-qxtnb\" (UID: \"d5e4044a-73ca-49f4-bd2a-835d862ad993\") " pod="openshift-dns/dns-default-qxtnb" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.992816 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/00278fd9-55b0-4157-aec3-a06cc2b248fd-cert\") pod \"ingress-canary-cc8lr\" (UID: \"00278fd9-55b0-4157-aec3-a06cc2b248fd\") " pod="openshift-ingress-canary/ingress-canary-cc8lr" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.992942 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d48b65d8-de38-4d1f-9162-3f16e3b8401b-default-certificate\") pod \"router-default-5444994796-9q8n5\" (UID: \"d48b65d8-de38-4d1f-9162-3f16e3b8401b\") " pod="openshift-ingress/router-default-5444994796-9q8n5" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.992978 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g7p7\" (UniqueName: \"kubernetes.io/projected/e7d69d2d-a07f-4fd5-a9d0-964e22792d42-kube-api-access-2g7p7\") pod \"package-server-manager-789f6589d5-8xpb5\" (UID: \"e7d69d2d-a07f-4fd5-a9d0-964e22792d42\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8xpb5" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.993007 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smst8\" (UniqueName: \"kubernetes.io/projected/e82eb460-ef08-480a-93a3-7ed6a33ea0ac-kube-api-access-smst8\") pod \"packageserver-d55dfcdfc-6cz8j\" (UID: \"e82eb460-ef08-480a-93a3-7ed6a33ea0ac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.993013 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/514e10d9-70a0-4a80-bd56-349522fad444-profile-collector-cert\") pod \"catalog-operator-68c6474976-mzvt5\" (UID: \"514e10d9-70a0-4a80-bd56-349522fad444\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5" Jan 28 15:19:49 crc 
kubenswrapper[4871]: I0128 15:19:49.993053 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d9cf\" (UniqueName: \"kubernetes.io/projected/10252089-5c17-442d-bf25-f6aa45799274-kube-api-access-7d9cf\") pod \"multus-admission-controller-857f4d67dd-tglzn\" (UID: \"10252089-5c17-442d-bf25-f6aa45799274\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tglzn" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.993077 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/88db3410-ef84-4e3b-821d-ef64a22c74f0-certs\") pod \"machine-config-server-pnb92\" (UID: \"88db3410-ef84-4e3b-821d-ef64a22c74f0\") " pod="openshift-machine-config-operator/machine-config-server-pnb92" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.993116 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca237fb9-41ce-45a3-bc60-7439787888da-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xpk8s\" (UID: \"ca237fb9-41ce-45a3-bc60-7439787888da\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xpk8s" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.993140 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/88db3410-ef84-4e3b-821d-ef64a22c74f0-node-bootstrap-token\") pod \"machine-config-server-pnb92\" (UID: \"88db3410-ef84-4e3b-821d-ef64a22c74f0\") " pod="openshift-machine-config-operator/machine-config-server-pnb92" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.993164 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sccph\" (UniqueName: 
\"kubernetes.io/projected/00278fd9-55b0-4157-aec3-a06cc2b248fd-kube-api-access-sccph\") pod \"ingress-canary-cc8lr\" (UID: \"00278fd9-55b0-4157-aec3-a06cc2b248fd\") " pod="openshift-ingress-canary/ingress-canary-cc8lr" Jan 28 15:19:49 crc kubenswrapper[4871]: I0128 15:19:49.993234 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-audit\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.006936 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91bad959-dee9-48f7-90c6-7d7462b8cf9f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-d2f7r\" (UID: \"91bad959-dee9-48f7-90c6-7d7462b8cf9f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d2f7r" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.007706 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.008213 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-serving-cert\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.008301 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/4acd9e6a-53ba-41a6-9b34-0c93dd921150-proxy-tls\") pod \"machine-config-operator-74547568cd-xv6dw\" (UID: \"4acd9e6a-53ba-41a6-9b34-0c93dd921150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.008453 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/28dc3b19-c5e4-4de6-889a-043b95a5f0f2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-lqzgh\" (UID: \"28dc3b19-c5e4-4de6-889a-043b95a5f0f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lqzgh" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.008505 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-encryption-config\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.008685 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-etcd-client\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.008766 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:50 crc kubenswrapper[4871]: 
I0128 15:19:50.008834 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.008913 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ee25a7d4-5043-48d7-91d1-68f2af96109a-registry-tls\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.008986 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.008992 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.009038 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnwvl\" (UniqueName: \"kubernetes.io/projected/91bad959-dee9-48f7-90c6-7d7462b8cf9f-kube-api-access-nnwvl\") pod \"openshift-apiserver-operator-796bbdcf4f-d2f7r\" (UID: 
\"91bad959-dee9-48f7-90c6-7d7462b8cf9f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d2f7r" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.009096 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9780dff9-f003-484a-af82-94fd7cd97a32-config\") pod \"authentication-operator-69f744f599-zthwv\" (UID: \"9780dff9-f003-484a-af82-94fd7cd97a32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.009132 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e82eb460-ef08-480a-93a3-7ed6a33ea0ac-apiservice-cert\") pod \"packageserver-d55dfcdfc-6cz8j\" (UID: \"e82eb460-ef08-480a-93a3-7ed6a33ea0ac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.009178 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ee25a7d4-5043-48d7-91d1-68f2af96109a-registry-certificates\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.009217 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-audit-dir\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.009268 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/03ea1439-2fa7-44a0-ba99-457655fde2a6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jhdgw\" (UID: \"03ea1439-2fa7-44a0-ba99-457655fde2a6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jhdgw" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.009435 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-audit-dir\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.009560 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ee25a7d4-5043-48d7-91d1-68f2af96109a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.010477 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.011983 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d48b65d8-de38-4d1f-9162-3f16e3b8401b-stats-auth\") pod \"router-default-5444994796-9q8n5\" (UID: \"d48b65d8-de38-4d1f-9162-3f16e3b8401b\") " pod="openshift-ingress/router-default-5444994796-9q8n5" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.012814 4871 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4acd9e6a-53ba-41a6-9b34-0c93dd921150-auth-proxy-config\") pod \"machine-config-operator-74547568cd-xv6dw\" (UID: \"4acd9e6a-53ba-41a6-9b34-0c93dd921150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.013376 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4acd9e6a-53ba-41a6-9b34-0c93dd921150-images\") pod \"machine-config-operator-74547568cd-xv6dw\" (UID: \"4acd9e6a-53ba-41a6-9b34-0c93dd921150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw" Jan 28 15:19:50 crc kubenswrapper[4871]: E0128 15:19:50.014399 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:50.514364376 +0000 UTC m=+142.410202698 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.016903 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.016917 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-q48tf"] Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.017025 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24ltz\" (UniqueName: \"kubernetes.io/projected/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-kube-api-access-24ltz\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.018536 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e058d636-1ce7-480a-a798-378083e7edf9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gqczj\" (UID: \"e058d636-1ce7-480a-a798-378083e7edf9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqczj" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.017121 
4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-image-import-ca\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.022847 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca237fb9-41ce-45a3-bc60-7439787888da-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xpk8s\" (UID: \"ca237fb9-41ce-45a3-bc60-7439787888da\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xpk8s"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.024370 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.024514 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5e4044a-73ca-49f4-bd2a-835d862ad993-config-volume\") pod \"dns-default-qxtnb\" (UID: \"d5e4044a-73ca-49f4-bd2a-835d862ad993\") " pod="openshift-dns/dns-default-qxtnb"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.024642 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ea1439-2fa7-44a0-ba99-457655fde2a6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jhdgw\" (UID: \"03ea1439-2fa7-44a0-ba99-457655fde2a6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jhdgw"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.024737 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b96ct\" (UniqueName: \"kubernetes.io/projected/6c77ad76-1c21-4ce4-ad9d-88f90aa9b133-kube-api-access-b96ct\") pod \"ingress-operator-5b745b69d9-mq867\" (UID: \"6c77ad76-1c21-4ce4-ad9d-88f90aa9b133\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mq867"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.024814 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1ad2bd7-612e-4f52-a8f3-5338193ee404-config\") pod \"machine-approver-56656f9798-4qvnc\" (UID: \"c1ad2bd7-612e-4f52-a8f3-5338193ee404\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4qvnc"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.024886 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvvfb\" (UniqueName: \"kubernetes.io/projected/c1ad2bd7-612e-4f52-a8f3-5338193ee404-kube-api-access-qvvfb\") pod \"machine-approver-56656f9798-4qvnc\" (UID: \"c1ad2bd7-612e-4f52-a8f3-5338193ee404\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4qvnc"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.025076 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/514e10d9-70a0-4a80-bd56-349522fad444-srv-cert\") pod \"catalog-operator-68c6474976-mzvt5\" (UID: \"514e10d9-70a0-4a80-bd56-349522fad444\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.025245 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb8kf\" (UniqueName: \"kubernetes.io/projected/4acd9e6a-53ba-41a6-9b34-0c93dd921150-kube-api-access-mb8kf\") pod \"machine-config-operator-74547568cd-xv6dw\" (UID: \"4acd9e6a-53ba-41a6-9b34-0c93dd921150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.028372 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad449099-f3be-4711-8c75-a8fab2eabda3-secret-volume\") pod \"collect-profiles-29493555-5cqh5\" (UID: \"ad449099-f3be-4711-8c75-a8fab2eabda3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.029581 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-image-import-ca\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.033250 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91bad959-dee9-48f7-90c6-7d7462b8cf9f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-d2f7r\" (UID: \"91bad959-dee9-48f7-90c6-7d7462b8cf9f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d2f7r"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.033724 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ee25a7d4-5043-48d7-91d1-68f2af96109a-registry-certificates\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.036355 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.037401 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e82eb460-ef08-480a-93a3-7ed6a33ea0ac-webhook-cert\") pod \"packageserver-d55dfcdfc-6cz8j\" (UID: \"e82eb460-ef08-480a-93a3-7ed6a33ea0ac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.037443 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/af48622d-dc2b-4a71-8d73-c16573492222-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wgnps\" (UID: \"af48622d-dc2b-4a71-8d73-c16573492222\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.037494 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v8dl\" (UniqueName: \"kubernetes.io/projected/514e10d9-70a0-4a80-bd56-349522fad444-kube-api-access-6v8dl\") pod \"catalog-operator-68c6474976-mzvt5\" (UID: \"514e10d9-70a0-4a80-bd56-349522fad444\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.037570 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.037622 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jnjh\" (UniqueName: \"kubernetes.io/projected/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-kube-api-access-5jnjh\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.037653 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpk4t\" (UniqueName: \"kubernetes.io/projected/12fb6d93-ae53-42df-933b-7fe134353202-kube-api-access-qpk4t\") pod \"service-ca-operator-777779d784-p4rlg\" (UID: \"12fb6d93-ae53-42df-933b-7fe134353202\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4rlg"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.037689 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-862qt\" (UniqueName: \"kubernetes.io/projected/ad449099-f3be-4711-8c75-a8fab2eabda3-kube-api-access-862qt\") pod \"collect-profiles-29493555-5cqh5\" (UID: \"ad449099-f3be-4711-8c75-a8fab2eabda3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.037728 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee25a7d4-5043-48d7-91d1-68f2af96109a-trusted-ca\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.040710 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7128a8bf-85ad-45ed-bb6f-9efc037aca2b-serving-cert\") pod \"openshift-config-operator-7777fb866f-vkb2v\" (UID: \"7128a8bf-85ad-45ed-bb6f-9efc037aca2b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.042649 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.074222 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee25a7d4-5043-48d7-91d1-68f2af96109a-trusted-ca\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.074683 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.075166 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/80a59a1a-e3b3-4e09-8be9-275fe4d95dff-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-v4rcn\" (UID: \"80a59a1a-e3b3-4e09-8be9-275fe4d95dff\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v4rcn"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.075395 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.076244 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.042820 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d48b65d8-de38-4d1f-9162-3f16e3b8401b-default-certificate\") pod \"router-default-5444994796-9q8n5\" (UID: \"d48b65d8-de38-4d1f-9162-3f16e3b8401b\") " pod="openshift-ingress/router-default-5444994796-9q8n5"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.076900 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ee25a7d4-5043-48d7-91d1-68f2af96109a-registry-tls\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.076977 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5ffkk"]
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.078464 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d48b65d8-de38-4d1f-9162-3f16e3b8401b-metrics-certs\") pod \"router-default-5444994796-9q8n5\" (UID: \"d48b65d8-de38-4d1f-9162-3f16e3b8401b\") " pod="openshift-ingress/router-default-5444994796-9q8n5"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.064848 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.079932 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/514e10d9-70a0-4a80-bd56-349522fad444-srv-cert\") pod \"catalog-operator-68c6474976-mzvt5\" (UID: \"514e10d9-70a0-4a80-bd56-349522fad444\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.081906 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w44gq\" (UniqueName: \"kubernetes.io/projected/ee25a7d4-5043-48d7-91d1-68f2af96109a-kube-api-access-w44gq\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.082042 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmsc2\" (UniqueName: \"kubernetes.io/projected/7128a8bf-85ad-45ed-bb6f-9efc037aca2b-kube-api-access-wmsc2\") pod \"openshift-config-operator-7777fb866f-vkb2v\" (UID: \"7128a8bf-85ad-45ed-bb6f-9efc037aca2b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.092882 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgbkp\" (UniqueName: \"kubernetes.io/projected/d48b65d8-de38-4d1f-9162-3f16e3b8401b-kube-api-access-pgbkp\") pod \"router-default-5444994796-9q8n5\" (UID: \"d48b65d8-de38-4d1f-9162-3f16e3b8401b\") " pod="openshift-ingress/router-default-5444994796-9q8n5"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.095338 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9q8n5"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.097188 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlpfh\" (UniqueName: \"kubernetes.io/projected/28dc3b19-c5e4-4de6-889a-043b95a5f0f2-kube-api-access-jlpfh\") pod \"control-plane-machine-set-operator-78cbb6b69f-lqzgh\" (UID: \"28dc3b19-c5e4-4de6-889a-043b95a5f0f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lqzgh"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.097601 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee25a7d4-5043-48d7-91d1-68f2af96109a-bound-sa-token\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.115262 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cth7n\" (UniqueName: \"kubernetes.io/projected/dc917f74-9ee2-4f96-baa1-9bc802c0d448-kube-api-access-cth7n\") pod \"machine-api-operator-5694c8668f-p4lhv\" (UID: \"dc917f74-9ee2-4f96-baa1-9bc802c0d448\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4lhv"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.139584 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.139990 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7d69d2d-a07f-4fd5-a9d0-964e22792d42-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8xpb5\" (UID: \"e7d69d2d-a07f-4fd5-a9d0-964e22792d42\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8xpb5"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.140074 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-service-ca\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.140435 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2-socket-dir\") pod \"csi-hostpathplugin-fvzpr\" (UID: \"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2\") " pod="hostpath-provisioner/csi-hostpathplugin-fvzpr"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.140475 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/09d59f6a-1c35-407f-b590-ad87ba92da70-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2j6j6\" (UID: \"09d59f6a-1c35-407f-b590-ad87ba92da70\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.140510 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12fb6d93-ae53-42df-933b-7fe134353202-serving-cert\") pod \"service-ca-operator-777779d784-p4rlg\" (UID: \"12fb6d93-ae53-42df-933b-7fe134353202\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4rlg"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.140549 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-console-config\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.140580 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e82eb460-ef08-480a-93a3-7ed6a33ea0ac-tmpfs\") pod \"packageserver-d55dfcdfc-6cz8j\" (UID: \"e82eb460-ef08-480a-93a3-7ed6a33ea0ac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.140619 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c77ad76-1c21-4ce4-ad9d-88f90aa9b133-metrics-tls\") pod \"ingress-operator-5b745b69d9-mq867\" (UID: \"6c77ad76-1c21-4ce4-ad9d-88f90aa9b133\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mq867"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.140643 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5e4044a-73ca-49f4-bd2a-835d862ad993-metrics-tls\") pod \"dns-default-qxtnb\" (UID: \"d5e4044a-73ca-49f4-bd2a-835d862ad993\") " pod="openshift-dns/dns-default-qxtnb"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.140668 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2571452b-5b45-43d1-bd39-35ef29c4fe80-console-oauth-config\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.140694 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9780dff9-f003-484a-af82-94fd7cd97a32-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zthwv\" (UID: \"9780dff9-f003-484a-af82-94fd7cd97a32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.140732 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm7pl\" (UniqueName: \"kubernetes.io/projected/af48622d-dc2b-4a71-8d73-c16573492222-kube-api-access-zm7pl\") pod \"olm-operator-6b444d44fb-wgnps\" (UID: \"af48622d-dc2b-4a71-8d73-c16573492222\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.140756 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2-plugins-dir\") pod \"csi-hostpathplugin-fvzpr\" (UID: \"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2\") " pod="hostpath-provisioner/csi-hostpathplugin-fvzpr"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.140780 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c1ad2bd7-612e-4f52-a8f3-5338193ee404-machine-approver-tls\") pod \"machine-approver-56656f9798-4qvnc\" (UID: \"c1ad2bd7-612e-4f52-a8f3-5338193ee404\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4qvnc"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.140804 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2-registration-dir\") pod \"csi-hostpathplugin-fvzpr\" (UID: \"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2\") " pod="hostpath-provisioner/csi-hostpathplugin-fvzpr"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.140825 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2-csi-data-dir\") pod \"csi-hostpathplugin-fvzpr\" (UID: \"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2\") " pod="hostpath-provisioner/csi-hostpathplugin-fvzpr"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.140850 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09d59f6a-1c35-407f-b590-ad87ba92da70-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2j6j6\" (UID: \"09d59f6a-1c35-407f-b590-ad87ba92da70\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141016 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c77ad76-1c21-4ce4-ad9d-88f90aa9b133-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mq867\" (UID: \"6c77ad76-1c21-4ce4-ad9d-88f90aa9b133\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mq867"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141087 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89h8h\" (UniqueName: \"kubernetes.io/projected/3dcb49be-1798-4698-9d3c-39bf78d992e6-kube-api-access-89h8h\") pod \"downloads-7954f5f757-zdfpg\" (UID: \"3dcb49be-1798-4698-9d3c-39bf78d992e6\") " pod="openshift-console/downloads-7954f5f757-zdfpg"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141118 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-trusted-ca-bundle\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141146 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-oauth-serving-cert\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141175 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzb9l\" (UniqueName: \"kubernetes.io/projected/03ea1439-2fa7-44a0-ba99-457655fde2a6-kube-api-access-lzb9l\") pod \"kube-storage-version-migrator-operator-b67b599dd-jhdgw\" (UID: \"03ea1439-2fa7-44a0-ba99-457655fde2a6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jhdgw"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141201 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad449099-f3be-4711-8c75-a8fab2eabda3-config-volume\") pod \"collect-profiles-29493555-5cqh5\" (UID: \"ad449099-f3be-4711-8c75-a8fab2eabda3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141225 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f7fv\" (UniqueName: \"kubernetes.io/projected/9780dff9-f003-484a-af82-94fd7cd97a32-kube-api-access-9f7fv\") pod \"authentication-operator-69f744f599-zthwv\" (UID: \"9780dff9-f003-484a-af82-94fd7cd97a32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141249 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2571452b-5b45-43d1-bd39-35ef29c4fe80-console-serving-cert\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141270 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gctxv\" (UniqueName: \"kubernetes.io/projected/2571452b-5b45-43d1-bd39-35ef29c4fe80-kube-api-access-gctxv\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141292 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c1ad2bd7-612e-4f52-a8f3-5338193ee404-auth-proxy-config\") pod \"machine-approver-56656f9798-4qvnc\" (UID: \"c1ad2bd7-612e-4f52-a8f3-5338193ee404\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4qvnc"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141315 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c77ad76-1c21-4ce4-ad9d-88f90aa9b133-trusted-ca\") pod \"ingress-operator-5b745b69d9-mq867\" (UID: \"6c77ad76-1c21-4ce4-ad9d-88f90aa9b133\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mq867"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141341 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kwvq\" (UniqueName: \"kubernetes.io/projected/8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2-kube-api-access-4kwvq\") pod \"csi-hostpathplugin-fvzpr\" (UID: \"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2\") " pod="hostpath-provisioner/csi-hostpathplugin-fvzpr"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141370 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrhxv\" (UniqueName: \"kubernetes.io/projected/88db3410-ef84-4e3b-821d-ef64a22c74f0-kube-api-access-xrhxv\") pod \"machine-config-server-pnb92\" (UID: \"88db3410-ef84-4e3b-821d-ef64a22c74f0\") " pod="openshift-machine-config-operator/machine-config-server-pnb92"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141395 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4gmd\" (UniqueName: \"kubernetes.io/projected/09d59f6a-1c35-407f-b590-ad87ba92da70-kube-api-access-w4gmd\") pod \"cluster-image-registry-operator-dc59b4c8b-2j6j6\" (UID: \"09d59f6a-1c35-407f-b590-ad87ba92da70\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141424 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/09d59f6a-1c35-407f-b590-ad87ba92da70-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2j6j6\" (UID: \"09d59f6a-1c35-407f-b590-ad87ba92da70\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141462 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12fb6d93-ae53-42df-933b-7fe134353202-config\") pod \"service-ca-operator-777779d784-p4rlg\" (UID: \"12fb6d93-ae53-42df-933b-7fe134353202\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4rlg"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141486 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9780dff9-f003-484a-af82-94fd7cd97a32-serving-cert\") pod \"authentication-operator-69f744f599-zthwv\" (UID: \"9780dff9-f003-484a-af82-94fd7cd97a32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141526 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9780dff9-f003-484a-af82-94fd7cd97a32-service-ca-bundle\") pod \"authentication-operator-69f744f599-zthwv\" (UID: \"9780dff9-f003-484a-af82-94fd7cd97a32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141550 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/af48622d-dc2b-4a71-8d73-c16573492222-srv-cert\") pod \"olm-operator-6b444d44fb-wgnps\" (UID: \"af48622d-dc2b-4a71-8d73-c16573492222\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141581 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2-mountpoint-dir\") pod \"csi-hostpathplugin-fvzpr\" (UID: \"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2\") " pod="hostpath-provisioner/csi-hostpathplugin-fvzpr"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141619 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h6cm\" (UniqueName: \"kubernetes.io/projected/d5e4044a-73ca-49f4-bd2a-835d862ad993-kube-api-access-7h6cm\") pod \"dns-default-qxtnb\" (UID: \"d5e4044a-73ca-49f4-bd2a-835d862ad993\") " pod="openshift-dns/dns-default-qxtnb"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141638 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00278fd9-55b0-4157-aec3-a06cc2b248fd-cert\") pod \"ingress-canary-cc8lr\" (UID: \"00278fd9-55b0-4157-aec3-a06cc2b248fd\") " pod="openshift-ingress-canary/ingress-canary-cc8lr"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141661 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smst8\" (UniqueName: \"kubernetes.io/projected/e82eb460-ef08-480a-93a3-7ed6a33ea0ac-kube-api-access-smst8\") pod \"packageserver-d55dfcdfc-6cz8j\" (UID: \"e82eb460-ef08-480a-93a3-7ed6a33ea0ac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141689 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g7p7\" (UniqueName: \"kubernetes.io/projected/e7d69d2d-a07f-4fd5-a9d0-964e22792d42-kube-api-access-2g7p7\") pod \"package-server-manager-789f6589d5-8xpb5\" (UID: \"e7d69d2d-a07f-4fd5-a9d0-964e22792d42\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8xpb5"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141722 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/88db3410-ef84-4e3b-821d-ef64a22c74f0-node-bootstrap-token\") pod \"machine-config-server-pnb92\" (UID: \"88db3410-ef84-4e3b-821d-ef64a22c74f0\") " pod="openshift-machine-config-operator/machine-config-server-pnb92"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141746 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/88db3410-ef84-4e3b-821d-ef64a22c74f0-certs\") pod \"machine-config-server-pnb92\" (UID: \"88db3410-ef84-4e3b-821d-ef64a22c74f0\") " pod="openshift-machine-config-operator/machine-config-server-pnb92"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141778 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sccph\" (UniqueName: \"kubernetes.io/projected/00278fd9-55b0-4157-aec3-a06cc2b248fd-kube-api-access-sccph\") pod \"ingress-canary-cc8lr\" (UID: \"00278fd9-55b0-4157-aec3-a06cc2b248fd\") " pod="openshift-ingress-canary/ingress-canary-cc8lr"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141809 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9780dff9-f003-484a-af82-94fd7cd97a32-config\") pod \"authentication-operator-69f744f599-zthwv\" (UID: \"9780dff9-f003-484a-af82-94fd7cd97a32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141834 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e82eb460-ef08-480a-93a3-7ed6a33ea0ac-apiservice-cert\") pod \"packageserver-d55dfcdfc-6cz8j\" (UID: \"e82eb460-ef08-480a-93a3-7ed6a33ea0ac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141857 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03ea1439-2fa7-44a0-ba99-457655fde2a6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jhdgw\" (UID: \"03ea1439-2fa7-44a0-ba99-457655fde2a6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jhdgw"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141894 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5e4044a-73ca-49f4-bd2a-835d862ad993-config-volume\") pod \"dns-default-qxtnb\" (UID: \"d5e4044a-73ca-49f4-bd2a-835d862ad993\") " pod="openshift-dns/dns-default-qxtnb"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141916 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ea1439-2fa7-44a0-ba99-457655fde2a6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jhdgw\" (UID: \"03ea1439-2fa7-44a0-ba99-457655fde2a6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jhdgw"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141938 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b96ct\" (UniqueName: \"kubernetes.io/projected/6c77ad76-1c21-4ce4-ad9d-88f90aa9b133-kube-api-access-b96ct\") pod \"ingress-operator-5b745b69d9-mq867\" (UID: \"6c77ad76-1c21-4ce4-ad9d-88f90aa9b133\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mq867"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141962 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvvfb\" (UniqueName: \"kubernetes.io/projected/c1ad2bd7-612e-4f52-a8f3-5338193ee404-kube-api-access-qvvfb\") pod \"machine-approver-56656f9798-4qvnc\" (UID: \"c1ad2bd7-612e-4f52-a8f3-5338193ee404\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4qvnc"
Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.141986 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1ad2bd7-612e-4f52-a8f3-5338193ee404-config\") pod \"machine-approver-56656f9798-4qvnc\" (UID: \"c1ad2bd7-612e-4f52-a8f3-5338193ee404\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4qvnc" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.142069 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad449099-f3be-4711-8c75-a8fab2eabda3-secret-volume\") pod \"collect-profiles-29493555-5cqh5\" (UID: \"ad449099-f3be-4711-8c75-a8fab2eabda3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.142099 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/af48622d-dc2b-4a71-8d73-c16573492222-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wgnps\" (UID: \"af48622d-dc2b-4a71-8d73-c16573492222\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.142179 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e82eb460-ef08-480a-93a3-7ed6a33ea0ac-webhook-cert\") pod \"packageserver-d55dfcdfc-6cz8j\" (UID: \"e82eb460-ef08-480a-93a3-7ed6a33ea0ac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.142206 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpk4t\" (UniqueName: \"kubernetes.io/projected/12fb6d93-ae53-42df-933b-7fe134353202-kube-api-access-qpk4t\") pod \"service-ca-operator-777779d784-p4rlg\" (UID: 
\"12fb6d93-ae53-42df-933b-7fe134353202\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4rlg" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.142230 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-862qt\" (UniqueName: \"kubernetes.io/projected/ad449099-f3be-4711-8c75-a8fab2eabda3-kube-api-access-862qt\") pod \"collect-profiles-29493555-5cqh5\" (UID: \"ad449099-f3be-4711-8c75-a8fab2eabda3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.143968 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-oauth-serving-cert\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.145925 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-console-config\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.146229 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad449099-f3be-4711-8c75-a8fab2eabda3-config-volume\") pod \"collect-profiles-29493555-5cqh5\" (UID: \"ad449099-f3be-4711-8c75-a8fab2eabda3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.146318 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/09d59f6a-1c35-407f-b590-ad87ba92da70-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2j6j6\" (UID: \"09d59f6a-1c35-407f-b590-ad87ba92da70\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.147014 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12fb6d93-ae53-42df-933b-7fe134353202-config\") pod \"service-ca-operator-777779d784-p4rlg\" (UID: \"12fb6d93-ae53-42df-933b-7fe134353202\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4rlg" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.147122 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-trusted-ca-bundle\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:19:50 crc kubenswrapper[4871]: E0128 15:19:50.148153 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:50.648130487 +0000 UTC m=+142.543968809 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.148380 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c1ad2bd7-612e-4f52-a8f3-5338193ee404-auth-proxy-config\") pod \"machine-approver-56656f9798-4qvnc\" (UID: \"c1ad2bd7-612e-4f52-a8f3-5338193ee404\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4qvnc" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.149540 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2-registration-dir\") pod \"csi-hostpathplugin-fvzpr\" (UID: \"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2\") " pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.149540 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c77ad76-1c21-4ce4-ad9d-88f90aa9b133-trusted-ca\") pod \"ingress-operator-5b745b69d9-mq867\" (UID: \"6c77ad76-1c21-4ce4-ad9d-88f90aa9b133\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mq867" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.149713 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9780dff9-f003-484a-af82-94fd7cd97a32-config\") pod \"authentication-operator-69f744f599-zthwv\" (UID: \"9780dff9-f003-484a-af82-94fd7cd97a32\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.150682 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9780dff9-f003-484a-af82-94fd7cd97a32-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zthwv\" (UID: \"9780dff9-f003-484a-af82-94fd7cd97a32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.150837 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2-plugins-dir\") pod \"csi-hostpathplugin-fvzpr\" (UID: \"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2\") " pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.151928 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12fb6d93-ae53-42df-933b-7fe134353202-serving-cert\") pod \"service-ca-operator-777779d784-p4rlg\" (UID: \"12fb6d93-ae53-42df-933b-7fe134353202\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4rlg" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.153176 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/88db3410-ef84-4e3b-821d-ef64a22c74f0-node-bootstrap-token\") pod \"machine-config-server-pnb92\" (UID: \"88db3410-ef84-4e3b-821d-ef64a22c74f0\") " pod="openshift-machine-config-operator/machine-config-server-pnb92" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.154280 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c1ad2bd7-612e-4f52-a8f3-5338193ee404-machine-approver-tls\") pod 
\"machine-approver-56656f9798-4qvnc\" (UID: \"c1ad2bd7-612e-4f52-a8f3-5338193ee404\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4qvnc" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.155780 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-service-ca\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.156086 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1ad2bd7-612e-4f52-a8f3-5338193ee404-config\") pod \"machine-approver-56656f9798-4qvnc\" (UID: \"c1ad2bd7-612e-4f52-a8f3-5338193ee404\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4qvnc" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.158406 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9780dff9-f003-484a-af82-94fd7cd97a32-service-ca-bundle\") pod \"authentication-operator-69f744f599-zthwv\" (UID: \"9780dff9-f003-484a-af82-94fd7cd97a32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.160227 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e82eb460-ef08-480a-93a3-7ed6a33ea0ac-tmpfs\") pod \"packageserver-d55dfcdfc-6cz8j\" (UID: \"e82eb460-ef08-480a-93a3-7ed6a33ea0ac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.160246 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2-mountpoint-dir\") pod \"csi-hostpathplugin-fvzpr\" (UID: \"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2\") " pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.160842 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ea1439-2fa7-44a0-ba99-457655fde2a6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jhdgw\" (UID: \"03ea1439-2fa7-44a0-ba99-457655fde2a6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jhdgw" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.161020 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2-socket-dir\") pod \"csi-hostpathplugin-fvzpr\" (UID: \"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2\") " pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.161343 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e82eb460-ef08-480a-93a3-7ed6a33ea0ac-apiservice-cert\") pod \"packageserver-d55dfcdfc-6cz8j\" (UID: \"e82eb460-ef08-480a-93a3-7ed6a33ea0ac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.161561 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5e4044a-73ca-49f4-bd2a-835d862ad993-config-volume\") pod \"dns-default-qxtnb\" (UID: \"d5e4044a-73ca-49f4-bd2a-835d862ad993\") " pod="openshift-dns/dns-default-qxtnb" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.161919 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2-csi-data-dir\") pod \"csi-hostpathplugin-fvzpr\" (UID: \"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2\") " pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.161931 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l2q84"] Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.163440 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad449099-f3be-4711-8c75-a8fab2eabda3-secret-volume\") pod \"collect-profiles-29493555-5cqh5\" (UID: \"ad449099-f3be-4711-8c75-a8fab2eabda3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.163921 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09d59f6a-1c35-407f-b590-ad87ba92da70-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2j6j6\" (UID: \"09d59f6a-1c35-407f-b590-ad87ba92da70\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.164695 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e82eb460-ef08-480a-93a3-7ed6a33ea0ac-webhook-cert\") pod \"packageserver-d55dfcdfc-6cz8j\" (UID: \"e82eb460-ef08-480a-93a3-7ed6a33ea0ac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.166029 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03ea1439-2fa7-44a0-ba99-457655fde2a6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jhdgw\" (UID: \"03ea1439-2fa7-44a0-ba99-457655fde2a6\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jhdgw" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.169001 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/88db3410-ef84-4e3b-821d-ef64a22c74f0-certs\") pod \"machine-config-server-pnb92\" (UID: \"88db3410-ef84-4e3b-821d-ef64a22c74f0\") " pod="openshift-machine-config-operator/machine-config-server-pnb92" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.169440 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/af48622d-dc2b-4a71-8d73-c16573492222-srv-cert\") pod \"olm-operator-6b444d44fb-wgnps\" (UID: \"af48622d-dc2b-4a71-8d73-c16573492222\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.171742 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5e4044a-73ca-49f4-bd2a-835d862ad993-metrics-tls\") pod \"dns-default-qxtnb\" (UID: \"d5e4044a-73ca-49f4-bd2a-835d862ad993\") " pod="openshift-dns/dns-default-qxtnb" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.172360 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2571452b-5b45-43d1-bd39-35ef29c4fe80-console-oauth-config\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.172461 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00278fd9-55b0-4157-aec3-a06cc2b248fd-cert\") pod \"ingress-canary-cc8lr\" (UID: \"00278fd9-55b0-4157-aec3-a06cc2b248fd\") " pod="openshift-ingress-canary/ingress-canary-cc8lr" Jan 28 
15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.172835 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c77ad76-1c21-4ce4-ad9d-88f90aa9b133-metrics-tls\") pod \"ingress-operator-5b745b69d9-mq867\" (UID: \"6c77ad76-1c21-4ce4-ad9d-88f90aa9b133\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mq867" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.173011 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2571452b-5b45-43d1-bd39-35ef29c4fe80-console-serving-cert\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.187250 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e058d636-1ce7-480a-a798-378083e7edf9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gqczj\" (UID: \"e058d636-1ce7-480a-a798-378083e7edf9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqczj" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.187808 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9780dff9-f003-484a-af82-94fd7cd97a32-serving-cert\") pod \"authentication-operator-69f744f599-zthwv\" (UID: \"9780dff9-f003-484a-af82-94fd7cd97a32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.189343 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/af48622d-dc2b-4a71-8d73-c16573492222-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wgnps\" (UID: 
\"af48622d-dc2b-4a71-8d73-c16573492222\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.189354 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7d69d2d-a07f-4fd5-a9d0-964e22792d42-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8xpb5\" (UID: \"e7d69d2d-a07f-4fd5-a9d0-964e22792d42\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8xpb5" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.190124 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l"] Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.194964 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdxl9\" (UniqueName: \"kubernetes.io/projected/ca237fb9-41ce-45a3-bc60-7439787888da-kube-api-access-mdxl9\") pod \"openshift-controller-manager-operator-756b6f6bc6-xpk8s\" (UID: \"ca237fb9-41ce-45a3-bc60-7439787888da\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xpk8s" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.195989 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d9cf\" (UniqueName: \"kubernetes.io/projected/10252089-5c17-442d-bf25-f6aa45799274-kube-api-access-7d9cf\") pod \"multus-admission-controller-857f4d67dd-tglzn\" (UID: \"10252089-5c17-442d-bf25-f6aa45799274\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tglzn" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.210749 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xswvb\" (UniqueName: \"kubernetes.io/projected/80a59a1a-e3b3-4e09-8be9-275fe4d95dff-kube-api-access-xswvb\") pod 
\"cluster-samples-operator-665b6dd947-v4rcn\" (UID: \"80a59a1a-e3b3-4e09-8be9-275fe4d95dff\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v4rcn" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.219535 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bh7lg"] Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.228040 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnwvl\" (UniqueName: \"kubernetes.io/projected/91bad959-dee9-48f7-90c6-7d7462b8cf9f-kube-api-access-nnwvl\") pod \"openshift-apiserver-operator-796bbdcf4f-d2f7r\" (UID: \"91bad959-dee9-48f7-90c6-7d7462b8cf9f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d2f7r" Jan 28 15:19:50 crc kubenswrapper[4871]: W0128 15:19:50.239550 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5226d39_f16c_4e81_8ae2_8d5f54a8a683.slice/crio-5bbb7a4551178db27120a612bb76c77062154588e8e9d04831016b89d1d1f074 WatchSource:0}: Error finding container 5bbb7a4551178db27120a612bb76c77062154588e8e9d04831016b89d1d1f074: Status 404 returned error can't find the container with id 5bbb7a4551178db27120a612bb76c77062154588e8e9d04831016b89d1d1f074 Jan 28 15:19:50 crc kubenswrapper[4871]: W0128 15:19:50.240082 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cdb94bb_6eed_48e8_b6fc_7a14d7e2b0ce.slice/crio-ccf0897d58bbb3ed2c58c5c653a34b4e77057439dc1ab4980ebc830fd9d25b2d WatchSource:0}: Error finding container ccf0897d58bbb3ed2c58c5c653a34b4e77057439dc1ab4980ebc830fd9d25b2d: Status 404 returned error can't find the container with id ccf0897d58bbb3ed2c58c5c653a34b4e77057439dc1ab4980ebc830fd9d25b2d Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.243266 4871 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:50 crc kubenswrapper[4871]: E0128 15:19:50.243676 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:50.743653768 +0000 UTC m=+142.639492130 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:50 crc kubenswrapper[4871]: W0128 15:19:50.247231 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1075f46c_ae1a_4c66_b7ad_5f5f5942fdd6.slice/crio-4aa0d9a0b83bae98e812ce1fc836f456b8366b1c9346e8ec544565bdbee537e0 WatchSource:0}: Error finding container 4aa0d9a0b83bae98e812ce1fc836f456b8366b1c9346e8ec544565bdbee537e0: Status 404 returned error can't find the container with id 4aa0d9a0b83bae98e812ce1fc836f456b8366b1c9346e8ec544565bdbee537e0 Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.251290 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v4rcn" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.252179 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24ltz\" (UniqueName: \"kubernetes.io/projected/5409ad1c-96fd-4c09-aeed-7a9722b0abc1-kube-api-access-24ltz\") pod \"apiserver-76f77b778f-jn4dg\" (UID: \"5409ad1c-96fd-4c09-aeed-7a9722b0abc1\") " pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.268608 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jnjh\" (UniqueName: \"kubernetes.io/projected/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-kube-api-access-5jnjh\") pod \"oauth-openshift-558db77b4-dzwqq\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.289506 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.292984 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v8dl\" (UniqueName: \"kubernetes.io/projected/514e10d9-70a0-4a80-bd56-349522fad444-kube-api-access-6v8dl\") pod \"catalog-operator-68c6474976-mzvt5\" (UID: \"514e10d9-70a0-4a80-bd56-349522fad444\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.297341 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.302064 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vxr9x"] Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.304868 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xpk8s" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.307011 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb8kf\" (UniqueName: \"kubernetes.io/projected/4acd9e6a-53ba-41a6-9b34-0c93dd921150-kube-api-access-mb8kf\") pod \"machine-config-operator-74547568cd-xv6dw\" (UID: \"4acd9e6a-53ba-41a6-9b34-0c93dd921150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.313725 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.324021 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-p4lhv" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.336088 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tglzn" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.342427 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqczj" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.344217 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:50 crc kubenswrapper[4871]: E0128 15:19:50.344487 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:50.844472889 +0000 UTC m=+142.740311211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.344568 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:50 crc kubenswrapper[4871]: E0128 15:19:50.344827 4871 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:50.844821331 +0000 UTC m=+142.740659653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.352352 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrhxv\" (UniqueName: \"kubernetes.io/projected/88db3410-ef84-4e3b-821d-ef64a22c74f0-kube-api-access-xrhxv\") pod \"machine-config-server-pnb92\" (UID: \"88db3410-ef84-4e3b-821d-ef64a22c74f0\") " pod="openshift-machine-config-operator/machine-config-server-pnb92" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.366041 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d2f7r" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.373279 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-862qt\" (UniqueName: \"kubernetes.io/projected/ad449099-f3be-4711-8c75-a8fab2eabda3-kube-api-access-862qt\") pod \"collect-profiles-29493555-5cqh5\" (UID: \"ad449099-f3be-4711-8c75-a8fab2eabda3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.373895 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lqzgh" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.389931 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4gmd\" (UniqueName: \"kubernetes.io/projected/09d59f6a-1c35-407f-b590-ad87ba92da70-kube-api-access-w4gmd\") pod \"cluster-image-registry-operator-dc59b4c8b-2j6j6\" (UID: \"09d59f6a-1c35-407f-b590-ad87ba92da70\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.392755 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.406372 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzb9l\" (UniqueName: \"kubernetes.io/projected/03ea1439-2fa7-44a0-ba99-457655fde2a6-kube-api-access-lzb9l\") pod \"kube-storage-version-migrator-operator-b67b599dd-jhdgw\" (UID: \"03ea1439-2fa7-44a0-ba99-457655fde2a6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jhdgw" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.427811 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jhdgw" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.433068 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f7fv\" (UniqueName: \"kubernetes.io/projected/9780dff9-f003-484a-af82-94fd7cd97a32-kube-api-access-9f7fv\") pod \"authentication-operator-69f744f599-zthwv\" (UID: \"9780dff9-f003-484a-af82-94fd7cd97a32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.436631 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.445046 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:50 crc kubenswrapper[4871]: E0128 15:19:50.445493 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:50.945479066 +0000 UTC m=+142.841317388 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.448403 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/09d59f6a-1c35-407f-b590-ad87ba92da70-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2j6j6\" (UID: \"09d59f6a-1c35-407f-b590-ad87ba92da70\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.455217 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.475038 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gctxv\" (UniqueName: \"kubernetes.io/projected/2571452b-5b45-43d1-bd39-35ef29c4fe80-kube-api-access-gctxv\") pod \"console-f9d7485db-j85fr\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.477666 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.497556 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smst8\" (UniqueName: \"kubernetes.io/projected/e82eb460-ef08-480a-93a3-7ed6a33ea0ac-kube-api-access-smst8\") pod \"packageserver-d55dfcdfc-6cz8j\" (UID: \"e82eb460-ef08-480a-93a3-7ed6a33ea0ac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.502139 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.515756 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g7p7\" (UniqueName: \"kubernetes.io/projected/e7d69d2d-a07f-4fd5-a9d0-964e22792d42-kube-api-access-2g7p7\") pod \"package-server-manager-789f6589d5-8xpb5\" (UID: \"e7d69d2d-a07f-4fd5-a9d0-964e22792d42\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8xpb5" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.522246 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8xpb5" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.532714 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvvfb\" (UniqueName: \"kubernetes.io/projected/c1ad2bd7-612e-4f52-a8f3-5338193ee404-kube-api-access-qvvfb\") pod \"machine-approver-56656f9798-4qvnc\" (UID: \"c1ad2bd7-612e-4f52-a8f3-5338193ee404\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4qvnc" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.550023 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:50 crc kubenswrapper[4871]: E0128 15:19:50.550562 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:51.050543003 +0000 UTC m=+142.946381325 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.556333 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm7pl\" (UniqueName: \"kubernetes.io/projected/af48622d-dc2b-4a71-8d73-c16573492222-kube-api-access-zm7pl\") pod \"olm-operator-6b444d44fb-wgnps\" (UID: \"af48622d-dc2b-4a71-8d73-c16573492222\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.572837 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pnb92" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.607024 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89h8h\" (UniqueName: \"kubernetes.io/projected/3dcb49be-1798-4698-9d3c-39bf78d992e6-kube-api-access-89h8h\") pod \"downloads-7954f5f757-zdfpg\" (UID: \"3dcb49be-1798-4698-9d3c-39bf78d992e6\") " pod="openshift-console/downloads-7954f5f757-zdfpg" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.610362 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sccph\" (UniqueName: \"kubernetes.io/projected/00278fd9-55b0-4157-aec3-a06cc2b248fd-kube-api-access-sccph\") pod \"ingress-canary-cc8lr\" (UID: \"00278fd9-55b0-4157-aec3-a06cc2b248fd\") " pod="openshift-ingress-canary/ingress-canary-cc8lr" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.612139 4871 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7h6cm\" (UniqueName: \"kubernetes.io/projected/d5e4044a-73ca-49f4-bd2a-835d862ad993-kube-api-access-7h6cm\") pod \"dns-default-qxtnb\" (UID: \"d5e4044a-73ca-49f4-bd2a-835d862ad993\") " pod="openshift-dns/dns-default-qxtnb" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.643777 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kwvq\" (UniqueName: \"kubernetes.io/projected/8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2-kube-api-access-4kwvq\") pod \"csi-hostpathplugin-fvzpr\" (UID: \"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2\") " pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.655854 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:50 crc kubenswrapper[4871]: E0128 15:19:50.656199 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:51.156180967 +0000 UTC m=+143.052019289 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.665694 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c77ad76-1c21-4ce4-ad9d-88f90aa9b133-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mq867\" (UID: \"6c77ad76-1c21-4ce4-ad9d-88f90aa9b133\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mq867" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.666990 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpk4t\" (UniqueName: \"kubernetes.io/projected/12fb6d93-ae53-42df-933b-7fe134353202-kube-api-access-qpk4t\") pod \"service-ca-operator-777779d784-p4rlg\" (UID: \"12fb6d93-ae53-42df-933b-7fe134353202\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4rlg" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.707126 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.708326 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b96ct\" (UniqueName: \"kubernetes.io/projected/6c77ad76-1c21-4ce4-ad9d-88f90aa9b133-kube-api-access-b96ct\") pod \"ingress-operator-5b745b69d9-mq867\" (UID: \"6c77ad76-1c21-4ce4-ad9d-88f90aa9b133\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mq867" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.716839 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mq867" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.745675 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4qvnc" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.758914 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:50 crc kubenswrapper[4871]: E0128 15:19:50.759262 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:51.25924869 +0000 UTC m=+143.155087012 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.766126 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zdfpg" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.788332 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.799612 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4rlg" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.808734 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.818251 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qnhwk" event={"ID":"2828ed17-56e2-4da0-8e37-2b366d02fbae","Type":"ContainerStarted","Data":"edc3251a14d521cb0ec8d7fc31070f81959480ecb7bd38add5b8f5daaf1788b0"} Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.818329 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qnhwk" event={"ID":"2828ed17-56e2-4da0-8e37-2b366d02fbae","Type":"ContainerStarted","Data":"a42192ce9e03392fd92b4fa3ae5c6bab9a93d5da47fa86e4b275a17c2dd164a2"} Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.831648 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qxtnb" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.848573 4871 generic.go:334] "Generic (PLEG): container finished" podID="736c08e7-ec73-4045-808e-dc00a1cdf894" containerID="c852ad1b8270dcff74c111ff7793bc38529eb27efdca322aed1e2e87c9cc0b3a" exitCode=0 Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.848901 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" event={"ID":"736c08e7-ec73-4045-808e-dc00a1cdf894","Type":"ContainerDied","Data":"c852ad1b8270dcff74c111ff7793bc38529eb27efdca322aed1e2e87c9cc0b3a"} Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.851797 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pnb92" event={"ID":"88db3410-ef84-4e3b-821d-ef64a22c74f0","Type":"ContainerStarted","Data":"d9a1f6d46d1dacff9e6491a0fc4460c1f7476b099c56fc4955d9dce1c4386970"} Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.856886 4871 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.860136 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:50 crc kubenswrapper[4871]: E0128 15:19:50.860680 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:51.360559736 +0000 UTC m=+143.256398058 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.864539 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cc8lr" Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.876881 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-q48tf" event={"ID":"dc34e29b-b869-467b-85a6-aac06d35be0f","Type":"ContainerStarted","Data":"7b8e2e2e79b893cc876a4dd48ad6544365fd39e6e7373c01eaf8aa3e39383367"} Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.876926 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-q48tf" event={"ID":"dc34e29b-b869-467b-85a6-aac06d35be0f","Type":"ContainerStarted","Data":"cf6cf9af318311f53cd5bb761eb3bdb8d1938960c3623830eedf02622927e642"} Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.951895 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vxr9x" event={"ID":"ad8c2aba-da78-44df-a60b-40de6f250df9","Type":"ContainerStarted","Data":"62bf15b9a02e406b217e8d8528b1fe73aef4b0914927aadc4a3630c7cfa653a5"} Jan 28 15:19:50 crc kubenswrapper[4871]: I0128 15:19:50.963289 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:50 crc kubenswrapper[4871]: E0128 15:19:50.966230 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:51.466208441 +0000 UTC m=+143.362046953 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.007509 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9qxf4" event={"ID":"95abe15d-d903-4744-87a3-61e27e8bb7e8","Type":"ContainerStarted","Data":"32e2d38079c7d7dd1bffdcbcd8bd8c61cecf1f1ba9681395f9e1e3be026cbd6f"} Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.007867 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9qxf4" event={"ID":"95abe15d-d903-4744-87a3-61e27e8bb7e8","Type":"ContainerStarted","Data":"554c7b097b74df91b0b980b804ceaf940c2334432c7f306f8d59c3d04f00a5ec"} Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.007885 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9qxf4" event={"ID":"95abe15d-d903-4744-87a3-61e27e8bb7e8","Type":"ContainerStarted","Data":"2e85ed147167be2ad22134f843e237a445fb2f6eea8e7f41bbbd5d59f89c407f"} Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.051087 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2lxj6" event={"ID":"75f1faad-49ec-4a72-944b-8857e0752a8c","Type":"ContainerStarted","Data":"1a2a239f9da14c2f441df5ad83fa1a48ed109c0f05424d4a69ac18db7046cae8"} Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.051139 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2lxj6" event={"ID":"75f1faad-49ec-4a72-944b-8857e0752a8c","Type":"ContainerStarted","Data":"4a2155ec1657b2c6b411cbc98defce3e77e22a6480e14a83fdb4f51755370fe4"} Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.059847 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" event={"ID":"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6","Type":"ContainerStarted","Data":"7f3444257c0454fce7920caaca7a72c463dadca25722f288d55eea1c602e62f3"} Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.061299 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" event={"ID":"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6","Type":"ContainerStarted","Data":"4aa0d9a0b83bae98e812ce1fc836f456b8366b1c9346e8ec544565bdbee537e0"} Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.070659 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.070816 4871 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-q676l container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.070855 4871 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" podUID="1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 
15:19:51.071494 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:51 crc kubenswrapper[4871]: E0128 15:19:51.072882 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:51.572860418 +0000 UTC m=+143.468698740 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.150379 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v4rcn"] Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.156060 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5ffkk" event={"ID":"3e91d969-6cd9-40bd-afc2-7861502f0073","Type":"ContainerStarted","Data":"f56bd0e32377c6923d3b701b1f35b34052ef5cd8b36c768b43bec21187b00b94"} Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.156117 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5ffkk" 
event={"ID":"3e91d969-6cd9-40bd-afc2-7861502f0073","Type":"ContainerStarted","Data":"a93b52c2de31131b0b08d156c22f803f8979f71de6c15e3f7573db8068baa01a"} Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.180178 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" event={"ID":"e6a80b6c-f05d-400b-adb2-8a1637983435","Type":"ContainerStarted","Data":"990e5817225facab5789d1fbda99793050573d1d17150ffc64e18da9dc40f287"} Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.180228 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" event={"ID":"e6a80b6c-f05d-400b-adb2-8a1637983435","Type":"ContainerStarted","Data":"870afee44db3338b91331c6919a121606649038a90e083236190590dcaebd80e"} Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.187944 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:51 crc kubenswrapper[4871]: E0128 15:19:51.190021 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:51.689999998 +0000 UTC m=+143.585838500 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.239254 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9q8n5" event={"ID":"d48b65d8-de38-4d1f-9162-3f16e3b8401b","Type":"ContainerStarted","Data":"467ebdfdef6a016cdb1514f11a2f5d56232aaf9d9f83e461ca8528bf8ba6c42e"} Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.239320 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9q8n5" event={"ID":"d48b65d8-de38-4d1f-9162-3f16e3b8401b","Type":"ContainerStarted","Data":"3023af111dd30822773d34bfaf3e9cb3152a6f113a8f11fa655a3570c5df71b4"} Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.255392 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jn4dg"] Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.256357 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqczj"] Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.269840 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" event={"ID":"e5226d39-f16c-4e81-8ae2-8d5f54a8a683","Type":"ContainerStarted","Data":"2bd4944a1a5d7c2403586d326a0845dd3f136102ea1a011df35342c707489c08"} Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.269926 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" event={"ID":"e5226d39-f16c-4e81-8ae2-8d5f54a8a683","Type":"ContainerStarted","Data":"5bbb7a4551178db27120a612bb76c77062154588e8e9d04831016b89d1d1f074"} Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.271384 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.274085 4871 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bh7lg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.276088 4871 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" podUID="e5226d39-f16c-4e81-8ae2-8d5f54a8a683" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.282548 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-l2q84" event={"ID":"2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce","Type":"ContainerStarted","Data":"eabea6d036ba5125b0a7454222cacc8e68c0ad754045fdc4d188279f923a5600"} Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.282666 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-l2q84" event={"ID":"2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce","Type":"ContainerStarted","Data":"ccf0897d58bbb3ed2c58c5c653a34b4e77057439dc1ab4980ebc830fd9d25b2d"} Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.283078 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console-operator/console-operator-58897d9998-l2q84" Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.284383 4871 patch_prober.go:28] interesting pod/console-operator-58897d9998-l2q84 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.284440 4871 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-l2q84" podUID="2cdb94bb-6eed-48e8-b6fc-7a14d7e2b0ce" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.289798 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:51 crc kubenswrapper[4871]: E0128 15:19:51.290923 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:51.790906302 +0000 UTC m=+143.686744614 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.312250 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tglzn"] Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.314672 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-p4lhv"] Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.326718 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ql229" event={"ID":"48a7be4a-2d1b-4b46-a720-4068e3fad906","Type":"ContainerStarted","Data":"0c3e1b0feb63457dbffe3326079e906f6b122a98866b8dfdcb312fac2b70e47b"} Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.327658 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ql229" Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.339151 4871 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ql229 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.339228 4871 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ql229" podUID="48a7be4a-2d1b-4b46-a720-4068e3fad906" containerName="marketplace-operator" probeResult="failure" 
output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.391085 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:51 crc kubenswrapper[4871]: E0128 15:19:51.394667 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:51.894636016 +0000 UTC m=+143.790474478 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.409897 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v"] Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.435447 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw"] Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.492712 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:51 crc kubenswrapper[4871]: E0128 15:19:51.494420 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:51.994381003 +0000 UTC m=+143.890219325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:51 crc kubenswrapper[4871]: W0128 15:19:51.591322 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10252089_5c17_442d_bf25_f6aa45799274.slice/crio-ce713b669d2a8f5e133527cbbf69e22c9f4a7ddfc46ba3c71ccfdd578822c30c WatchSource:0}: Error finding container ce713b669d2a8f5e133527cbbf69e22c9f4a7ddfc46ba3c71ccfdd578822c30c: Status 404 returned error can't find the container with id ce713b669d2a8f5e133527cbbf69e22c9f4a7ddfc46ba3c71ccfdd578822c30c Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.596140 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:51 crc kubenswrapper[4871]: E0128 15:19:51.599575 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:52.099558633 +0000 UTC m=+143.995396955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.700543 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:51 crc kubenswrapper[4871]: E0128 15:19:51.703949 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:52.203918077 +0000 UTC m=+144.099756399 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.810316 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:51 crc kubenswrapper[4871]: E0128 15:19:51.811000 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:52.310980877 +0000 UTC m=+144.206819199 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.825922 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-vxr9x" podStartSLOduration=122.825897151 podStartE2EDuration="2m2.825897151s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:51.824574389 +0000 UTC m=+143.720412721" watchObservedRunningTime="2026-01-28 15:19:51.825897151 +0000 UTC m=+143.721735463" Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.911864 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:51 crc kubenswrapper[4871]: E0128 15:19:51.912333 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:52.412311563 +0000 UTC m=+144.308149885 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:51 crc kubenswrapper[4871]: I0128 15:19:51.977541 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d2f7r"] Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.012023 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lqzgh"] Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.013408 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:52 crc kubenswrapper[4871]: E0128 15:19:52.028415 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:52.528390721 +0000 UTC m=+144.424229043 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.031218 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xpk8s"] Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.097708 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-9q8n5" Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.116411 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:52 crc kubenswrapper[4871]: E0128 15:19:52.116681 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:52.616658441 +0000 UTC m=+144.512496763 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.116806 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:52 crc kubenswrapper[4871]: E0128 15:19:52.117251 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:52.61724449 +0000 UTC m=+144.513082812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.218573 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:52 crc kubenswrapper[4871]: E0128 15:19:52.219238 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:52.719222518 +0000 UTC m=+144.615060840 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.282770 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qnhwk" podStartSLOduration=123.282740931 podStartE2EDuration="2m3.282740931s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:52.279667073 +0000 UTC m=+144.175505395" watchObservedRunningTime="2026-01-28 15:19:52.282740931 +0000 UTC m=+144.178579253" Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.315138 4871 patch_prober.go:28] interesting pod/router-default-5444994796-9q8n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:19:52 crc kubenswrapper[4871]: [-]has-synced failed: reason withheld Jan 28 15:19:52 crc kubenswrapper[4871]: [+]process-running ok Jan 28 15:19:52 crc kubenswrapper[4871]: healthz check failed Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.315217 4871 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9q8n5" podUID="d48b65d8-de38-4d1f-9162-3f16e3b8401b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.332721 4871 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:52 crc kubenswrapper[4871]: E0128 15:19:52.333200 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:52.833171647 +0000 UTC m=+144.729009969 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.427376 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" event={"ID":"5409ad1c-96fd-4c09-aeed-7a9722b0abc1","Type":"ContainerStarted","Data":"d01a4e6afc24670e82e708f4c0d58b2baef01ab3a2fd64305f9cb200d88a87a7"} Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.434505 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:52 crc kubenswrapper[4871]: E0128 15:19:52.434640 4871 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:52.934621049 +0000 UTC m=+144.830459371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.435127 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:52 crc kubenswrapper[4871]: E0128 15:19:52.435625 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:52.935612741 +0000 UTC m=+144.831451063 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.457212 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4qvnc" event={"ID":"c1ad2bd7-612e-4f52-a8f3-5338193ee404","Type":"ContainerStarted","Data":"9f516980cd0c1650cc16e38098b40d48baff575a935bd3cf3e8b0012ee340057"} Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.514129 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-9q8n5" podStartSLOduration=123.514103349 podStartE2EDuration="2m3.514103349s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:52.394391557 +0000 UTC m=+144.290229879" watchObservedRunningTime="2026-01-28 15:19:52.514103349 +0000 UTC m=+144.409941671" Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.528845 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lqzgh" event={"ID":"28dc3b19-c5e4-4de6-889a-043b95a5f0f2","Type":"ContainerStarted","Data":"f5e87fcd11d7b39e01f3d2e2b7659292422d155e87e214ca1af83ab516808a0b"} Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.538085 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" podStartSLOduration=123.538062053 
podStartE2EDuration="2m3.538062053s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:52.51598566 +0000 UTC m=+144.411823982" watchObservedRunningTime="2026-01-28 15:19:52.538062053 +0000 UTC m=+144.433900385" Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.538240 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2lxj6" podStartSLOduration=123.538233548 podStartE2EDuration="2m3.538233548s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:52.487006037 +0000 UTC m=+144.382844359" watchObservedRunningTime="2026-01-28 15:19:52.538233548 +0000 UTC m=+144.434071870" Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.540742 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:52 crc kubenswrapper[4871]: E0128 15:19:52.541857 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:53.041637937 +0000 UTC m=+144.937476259 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.549480 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d2f7r" event={"ID":"91bad959-dee9-48f7-90c6-7d7462b8cf9f","Type":"ContainerStarted","Data":"caba5bb90049627f5902f8a4a92783fd4041bbd2cd3a6b6b47ac2ae2f96263e6"} Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.566150 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" podStartSLOduration=123.566123347 podStartE2EDuration="2m3.566123347s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:52.548628939 +0000 UTC m=+144.444467261" watchObservedRunningTime="2026-01-28 15:19:52.566123347 +0000 UTC m=+144.461961669" Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.576503 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vxr9x" event={"ID":"ad8c2aba-da78-44df-a60b-40de6f250df9","Type":"ContainerStarted","Data":"58daedad4aeaeccbd1db6106de421e4549fe7f8979e7669520489a8840abcea4"} Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.578549 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqczj" 
event={"ID":"e058d636-1ce7-480a-a798-378083e7edf9","Type":"ContainerStarted","Data":"4668b68fee62a0b76a693f6b71ecf657c4140d94d70e0cfb4b28a640245007d6"} Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.584077 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-c7255" podStartSLOduration=123.584053787 podStartE2EDuration="2m3.584053787s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:52.583344765 +0000 UTC m=+144.479183077" watchObservedRunningTime="2026-01-28 15:19:52.584053787 +0000 UTC m=+144.479892109" Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.623569 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ql229" podStartSLOduration=123.623552835 podStartE2EDuration="2m3.623552835s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:52.622157671 +0000 UTC m=+144.517995993" watchObservedRunningTime="2026-01-28 15:19:52.623552835 +0000 UTC m=+144.519391157" Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.632938 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tglzn" event={"ID":"10252089-5c17-442d-bf25-f6aa45799274","Type":"ContainerStarted","Data":"ce713b669d2a8f5e133527cbbf69e22c9f4a7ddfc46ba3c71ccfdd578822c30c"} Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.644011 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:52 crc kubenswrapper[4871]: E0128 15:19:52.645033 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:53.14501618 +0000 UTC m=+145.040854502 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.651541 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v4rcn" event={"ID":"80a59a1a-e3b3-4e09-8be9-275fe4d95dff","Type":"ContainerStarted","Data":"0b3c81d0dec03d38e3a1e19822a3d23134692d9e9ee30bb9e4ae750de98d28e0"} Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.688159 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9qxf4" podStartSLOduration=123.688135012 podStartE2EDuration="2m3.688135012s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:52.661287678 +0000 UTC m=+144.557126000" watchObservedRunningTime="2026-01-28 15:19:52.688135012 +0000 UTC m=+144.583973354" Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.691085 4871 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-q48tf" event={"ID":"dc34e29b-b869-467b-85a6-aac06d35be0f","Type":"ContainerStarted","Data":"9c1218020c7f7303340c58b1743bbb4daf8c8e5d0abb2a41a8be2b2a6bebd53a"} Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.722662 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5ffkk" event={"ID":"3e91d969-6cd9-40bd-afc2-7861502f0073","Type":"ContainerStarted","Data":"db226c87e4a85e2d50f4c8a4cc4045daa0e30c16a76d339d0d7aaf5bab5fe429"} Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.741500 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-p4lhv" event={"ID":"dc917f74-9ee2-4f96-baa1-9bc802c0d448","Type":"ContainerStarted","Data":"dbf252ca1ffe44724a64a20b987f3a93bb79f97fee46420376586ced0c491a2a"} Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.741542 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-p4lhv" event={"ID":"dc917f74-9ee2-4f96-baa1-9bc802c0d448","Type":"ContainerStarted","Data":"51add3b1931a8d5177f4e07b53b2aaa143657649a35342c4042570d5860b11ff"} Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.744548 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:52 crc kubenswrapper[4871]: E0128 15:19:52.745708 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 15:19:53.245686136 +0000 UTC m=+145.141524458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.750878 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pnb92" event={"ID":"88db3410-ef84-4e3b-821d-ef64a22c74f0","Type":"ContainerStarted","Data":"e2f5935769533ca0201809528cc87b7d1a3b282aea6c7f819c1f694ed0cabbb6"} Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.765258 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5ffkk" podStartSLOduration=123.765234808 podStartE2EDuration="2m3.765234808s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:52.7351468 +0000 UTC m=+144.630985122" watchObservedRunningTime="2026-01-28 15:19:52.765234808 +0000 UTC m=+144.661073130" Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.765551 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-l2q84" podStartSLOduration=124.765546408 podStartE2EDuration="2m4.765546408s" podCreationTimestamp="2026-01-28 15:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:52.763121511 +0000 UTC 
m=+144.658959833" watchObservedRunningTime="2026-01-28 15:19:52.765546408 +0000 UTC m=+144.661384730" Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.785763 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8xpb5"] Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.787895 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v" event={"ID":"7128a8bf-85ad-45ed-bb6f-9efc037aca2b","Type":"ContainerStarted","Data":"235897aa6dcae0e47a77600e4311f5751029bfd58a98a3c749209f1082500d23"} Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.802705 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw" event={"ID":"4acd9e6a-53ba-41a6-9b34-0c93dd921150","Type":"ContainerStarted","Data":"a519bbae974720ba303b6a81d945eaa30ce92c75f145f28651b8173bfce67e57"} Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.805080 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-pnb92" podStartSLOduration=5.8050581470000004 podStartE2EDuration="5.805058147s" podCreationTimestamp="2026-01-28 15:19:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:52.777437276 +0000 UTC m=+144.673275598" watchObservedRunningTime="2026-01-28 15:19:52.805058147 +0000 UTC m=+144.700896469" Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.808548 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.811565 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/marketplace-operator-79b997595-ql229" Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.819362 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.821648 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-j85fr"] Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.821726 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-l2q84" Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.833654 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-q48tf" podStartSLOduration=123.833622966 podStartE2EDuration="2m3.833622966s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:52.805685646 +0000 UTC m=+144.701523968" watchObservedRunningTime="2026-01-28 15:19:52.833622966 +0000 UTC m=+144.729461288" Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.857815 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.860220 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jhdgw"] Jan 28 15:19:52 crc kubenswrapper[4871]: E0128 15:19:52.861947 4871 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:53.361933578 +0000 UTC m=+145.257771900 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.868632 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5"] Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.883645 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dzwqq"] Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.950862 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5"] Jan 28 15:19:52 crc kubenswrapper[4871]: I0128 15:19:52.959079 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:52 crc kubenswrapper[4871]: E0128 15:19:52.959405 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-28 15:19:53.459389681 +0000 UTC m=+145.355228003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.040658 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6"] Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.062639 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:53 crc kubenswrapper[4871]: E0128 15:19:53.062999 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:53.562987781 +0000 UTC m=+145.458826103 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.139572 4871 patch_prober.go:28] interesting pod/router-default-5444994796-9q8n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:19:53 crc kubenswrapper[4871]: [-]has-synced failed: reason withheld Jan 28 15:19:53 crc kubenswrapper[4871]: [+]process-running ok Jan 28 15:19:53 crc kubenswrapper[4871]: healthz check failed Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.139628 4871 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9q8n5" podUID="d48b65d8-de38-4d1f-9162-3f16e3b8401b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.165114 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:53 crc kubenswrapper[4871]: E0128 15:19:53.165473 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 15:19:53.665456295 +0000 UTC m=+145.561294617 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.219289 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zthwv"] Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.250644 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mq867"] Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.251973 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qxtnb"] Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.269295 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:53 crc kubenswrapper[4871]: E0128 15:19:53.269678 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:53.769665144 +0000 UTC m=+145.665503466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.310430 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j"] Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.333490 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zdfpg"] Jan 28 15:19:53 crc kubenswrapper[4871]: W0128 15:19:53.342677 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5e4044a_73ca_49f4_bd2a_835d862ad993.slice/crio-cffb9e1c729e4c07900dab9e807309be4902c8d2b1871a6056f887a82b2b364e WatchSource:0}: Error finding container cffb9e1c729e4c07900dab9e807309be4902c8d2b1871a6056f887a82b2b364e: Status 404 returned error can't find the container with id cffb9e1c729e4c07900dab9e807309be4902c8d2b1871a6056f887a82b2b364e Jan 28 15:19:53 crc kubenswrapper[4871]: W0128 15:19:53.343656 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c77ad76_1c21_4ce4_ad9d_88f90aa9b133.slice/crio-eb69eb2c3bcda6bcd035aaf6b2e6a4174697c801b8bd2d4b793e41b2cc9e665a WatchSource:0}: Error finding container eb69eb2c3bcda6bcd035aaf6b2e6a4174697c801b8bd2d4b793e41b2cc9e665a: Status 404 returned error can't find the container with id eb69eb2c3bcda6bcd035aaf6b2e6a4174697c801b8bd2d4b793e41b2cc9e665a Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.354716 4871 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps"] Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.371157 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:53 crc kubenswrapper[4871]: E0128 15:19:53.371438 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:53.871423114 +0000 UTC m=+145.767261436 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.375741 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-p4rlg"] Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.398404 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cc8lr"] Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.424160 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fvzpr"] Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.472141 4871 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:53 crc kubenswrapper[4871]: E0128 15:19:53.472493 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:53.972480723 +0000 UTC m=+145.868319035 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:53 crc kubenswrapper[4871]: W0128 15:19:53.491747 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bf07a92_00a1_4c1c_ab4d_d40286c2d7a2.slice/crio-ccb53660e1016adae71368e03533fadbc8f5202b9df12f9a0dd567c01df6e51e WatchSource:0}: Error finding container ccb53660e1016adae71368e03533fadbc8f5202b9df12f9a0dd567c01df6e51e: Status 404 returned error can't find the container with id ccb53660e1016adae71368e03533fadbc8f5202b9df12f9a0dd567c01df6e51e Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.575175 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:53 crc kubenswrapper[4871]: E0128 15:19:53.575642 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:54.075625548 +0000 UTC m=+145.971463870 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.681313 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:53 crc kubenswrapper[4871]: E0128 15:19:53.681920 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:54.181902882 +0000 UTC m=+146.077741194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.784425 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:53 crc kubenswrapper[4871]: E0128 15:19:53.785007 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:54.284983246 +0000 UTC m=+146.180821568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.831767 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tglzn" event={"ID":"10252089-5c17-442d-bf25-f6aa45799274","Type":"ContainerStarted","Data":"eb6f563378d9ed9158fe29b1b2fa534a7ba22132cceed49b8fe87e67f43e0bdc"} Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.831834 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tglzn" event={"ID":"10252089-5c17-442d-bf25-f6aa45799274","Type":"ContainerStarted","Data":"02ca96f9e2008292d56e83c19bb2a2db613e7d1f007cb26ec2b730e730ac2daa"} Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.868449 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6" event={"ID":"09d59f6a-1c35-407f-b590-ad87ba92da70","Type":"ContainerStarted","Data":"aebb7dd4d39b8936be2da44399f99bea39a5a9d43b37af5eea33ab56eeb513b9"} Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.874734 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4rlg" event={"ID":"12fb6d93-ae53-42df-933b-7fe134353202","Type":"ContainerStarted","Data":"7bf50e98141a548a363e1b9bbae19373d392f58bb2131e470d9793cb30a4d865"} Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.887412 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:53 crc kubenswrapper[4871]: E0128 15:19:53.887742 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:54.387728848 +0000 UTC m=+146.283567170 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.907068 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" event={"ID":"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2","Type":"ContainerStarted","Data":"ccb53660e1016adae71368e03533fadbc8f5202b9df12f9a0dd567c01df6e51e"} Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.929926 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j" event={"ID":"e82eb460-ef08-480a-93a3-7ed6a33ea0ac","Type":"ContainerStarted","Data":"127ed73d72f28503a06da85a981cbd58e614579462eac0c1838f63b2c2ce274e"} Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.989134 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.997323 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v4rcn" event={"ID":"80a59a1a-e3b3-4e09-8be9-275fe4d95dff","Type":"ContainerStarted","Data":"04a95189517ca8dde70466d911846cfd6c21473d30c1c082bbe611889a06bee9"} Jan 28 15:19:53 crc kubenswrapper[4871]: I0128 15:19:53.997383 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v4rcn" event={"ID":"80a59a1a-e3b3-4e09-8be9-275fe4d95dff","Type":"ContainerStarted","Data":"d6b6becb40e46d5d4376aa74fe3f4b6c29cb1a9a23c09e111ea78f31e4bb2810"} Jan 28 15:19:53 crc kubenswrapper[4871]: E0128 15:19:53.997910 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:54.490064377 +0000 UTC m=+146.385902689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.022648 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lqzgh" event={"ID":"28dc3b19-c5e4-4de6-889a-043b95a5f0f2","Type":"ContainerStarted","Data":"490dad8085dfe25a7c85aee1bbb1e0a8d90e5fb7f63a442ace29449dfa8a7c46"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.040661 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qxtnb" event={"ID":"d5e4044a-73ca-49f4-bd2a-835d862ad993","Type":"ContainerStarted","Data":"cffb9e1c729e4c07900dab9e807309be4902c8d2b1871a6056f887a82b2b364e"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.046900 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4qvnc" event={"ID":"c1ad2bd7-612e-4f52-a8f3-5338193ee404","Type":"ContainerStarted","Data":"6379a0a46715b786268a902f76a1a5fcb6b8a96d909a8c745d04459200a5bf10"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.046948 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4qvnc" event={"ID":"c1ad2bd7-612e-4f52-a8f3-5338193ee404","Type":"ContainerStarted","Data":"0ecc9cb9860734206823e33161044b359d15bdb03a188f5106080dc88a78f7ad"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.067265 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v4rcn" podStartSLOduration=126.067221075 podStartE2EDuration="2m6.067221075s" podCreationTimestamp="2026-01-28 15:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:54.065230831 +0000 UTC m=+145.961069153" watchObservedRunningTime="2026-01-28 15:19:54.067221075 +0000 UTC m=+145.963059397" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.077735 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jhdgw" event={"ID":"03ea1439-2fa7-44a0-ba99-457655fde2a6","Type":"ContainerStarted","Data":"cdc51c347986d652a23a8594e41b4b19e978a34ad8ff42dcb99b3c90c5d20b2d"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.077775 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jhdgw" event={"ID":"03ea1439-2fa7-44a0-ba99-457655fde2a6","Type":"ContainerStarted","Data":"4d1e52c9bfc077f8b7929ee7e0869abd061733c3aea18848ab93cccb1891a148"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.091552 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:54 crc kubenswrapper[4871]: E0128 15:19:54.093412 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 15:19:54.593399268 +0000 UTC m=+146.489237590 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.098483 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lqzgh" podStartSLOduration=125.09846003 podStartE2EDuration="2m5.09846003s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:54.095145174 +0000 UTC m=+145.990983496" watchObservedRunningTime="2026-01-28 15:19:54.09846003 +0000 UTC m=+145.994298362" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.111118 4871 patch_prober.go:28] interesting pod/router-default-5444994796-9q8n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:19:54 crc kubenswrapper[4871]: [-]has-synced failed: reason withheld Jan 28 15:19:54 crc kubenswrapper[4871]: [+]process-running ok Jan 28 15:19:54 crc kubenswrapper[4871]: healthz check failed Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.111180 4871 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9q8n5" podUID="d48b65d8-de38-4d1f-9162-3f16e3b8401b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.115057 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" event={"ID":"736c08e7-ec73-4045-808e-dc00a1cdf894","Type":"ContainerStarted","Data":"6b8057df66ab0f88e9de86965955c5acdc7a9a3ba11f5e7f31bc4a3e22d95191"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.125420 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4qvnc" podStartSLOduration=126.125402528 podStartE2EDuration="2m6.125402528s" podCreationTimestamp="2026-01-28 15:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:54.123986062 +0000 UTC m=+146.019824384" watchObservedRunningTime="2026-01-28 15:19:54.125402528 +0000 UTC m=+146.021240850" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.153966 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" podStartSLOduration=125.153947917 podStartE2EDuration="2m5.153947917s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:54.153005137 +0000 UTC m=+146.048843459" watchObservedRunningTime="2026-01-28 15:19:54.153947917 +0000 UTC m=+146.049786239" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.158307 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-p4lhv" event={"ID":"dc917f74-9ee2-4f96-baa1-9bc802c0d448","Type":"ContainerStarted","Data":"2279b21e8255039f1609d3619eee4d0a74b2177a9dff9e10c875ec9d62f25f05"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.192632 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:54 crc kubenswrapper[4871]: E0128 15:19:54.192872 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:54.692830965 +0000 UTC m=+146.588669287 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.193646 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:54 crc kubenswrapper[4871]: E0128 15:19:54.196216 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:54.696178612 +0000 UTC m=+146.592017134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.209203 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8xpb5" event={"ID":"e7d69d2d-a07f-4fd5-a9d0-964e22792d42","Type":"ContainerStarted","Data":"c18b466566cfd324389b0d1c424a12dd31ba37ee310736f78289327712cc8c8b"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.209268 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8xpb5" event={"ID":"e7d69d2d-a07f-4fd5-a9d0-964e22792d42","Type":"ContainerStarted","Data":"42eb22bcb05212e7611c975eb1870c167860a427060224be13814c9d4912f114"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.249526 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jhdgw" podStartSLOduration=125.249503271 podStartE2EDuration="2m5.249503271s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:54.208474083 +0000 UTC m=+146.104312415" watchObservedRunningTime="2026-01-28 15:19:54.249503271 +0000 UTC m=+146.145341593" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.291355 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5" 
event={"ID":"ad449099-f3be-4711-8c75-a8fab2eabda3","Type":"ContainerStarted","Data":"e5e08a4815f21ec7ad5f2c92ee45e8648c497f8661d58d7ab3d2e8f9d54fd623"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.294361 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:54 crc kubenswrapper[4871]: E0128 15:19:54.294812 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:54.794787652 +0000 UTC m=+146.690625974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.314641 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zdfpg" event={"ID":"3dcb49be-1798-4698-9d3c-39bf78d992e6","Type":"ContainerStarted","Data":"44547c284758d2cc4d039731cd7635b2a1fb7804817960f7a3854d3ba4c8eace"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.315307 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-zdfpg" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.319724 4871 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-zdfpg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.319781 4871 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zdfpg" podUID="3dcb49be-1798-4698-9d3c-39bf78d992e6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.323152 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps" event={"ID":"af48622d-dc2b-4a71-8d73-c16573492222","Type":"ContainerStarted","Data":"4b5ffe0d4178476cb1ba0624cce50437ecb8b0471968a48a9d8c1deafd23951a"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.342122 4871 generic.go:334] "Generic (PLEG): container finished" podID="7128a8bf-85ad-45ed-bb6f-9efc037aca2b" containerID="10e59812db1deb0ea66b8522a6693d265ec84a42b818eaf468413a54e0fb8ec7" exitCode=0 Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.342249 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v" event={"ID":"7128a8bf-85ad-45ed-bb6f-9efc037aca2b","Type":"ContainerDied","Data":"10e59812db1deb0ea66b8522a6693d265ec84a42b818eaf468413a54e0fb8ec7"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.345689 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-p4lhv" podStartSLOduration=125.345646482 podStartE2EDuration="2m5.345646482s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-28 15:19:54.254545121 +0000 UTC m=+146.150383453" watchObservedRunningTime="2026-01-28 15:19:54.345646482 +0000 UTC m=+146.241484804" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.353404 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cc8lr" event={"ID":"00278fd9-55b0-4157-aec3-a06cc2b248fd","Type":"ContainerStarted","Data":"038a8c41eb9811d7d6385289b0e90e951f93fd8ca4d4995998673266c4494a0f"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.360881 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.361639 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.372849 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" event={"ID":"2aa3d7cc-57c4-420c-bb92-e7fc4525a763","Type":"ContainerStarted","Data":"5f4ff2c4065c78d258947fcc60bf52c982e6edc0f82d96b59a4b73ed8af7fa9f"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.383542 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5" podStartSLOduration=126.383513548 podStartE2EDuration="2m6.383513548s" podCreationTimestamp="2026-01-28 15:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:54.351903661 +0000 UTC m=+146.247741983" watchObservedRunningTime="2026-01-28 15:19:54.383513548 +0000 UTC m=+146.279351880" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.387868 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw" event={"ID":"4acd9e6a-53ba-41a6-9b34-0c93dd921150","Type":"ContainerStarted","Data":"0ac6a2b41c7a045ef664e485b4bb44110ca0ee6272d83c9bdd7605eae0cf3ba9"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.387915 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw" event={"ID":"4acd9e6a-53ba-41a6-9b34-0c93dd921150","Type":"ContainerStarted","Data":"1ffd8463b7b3c609e12c027f36f2315aa632bfcf932ddd15d97873a7f5b93ca9"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.389175 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j85fr" event={"ID":"2571452b-5b45-43d1-bd39-35ef29c4fe80","Type":"ContainerStarted","Data":"9b744af10254a55a98bb0ffab18f7efb716e2df3f880ee5219d058d12b952e7b"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.391878 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-zdfpg" podStartSLOduration=125.391864084 podStartE2EDuration="2m5.391864084s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:54.382536407 +0000 UTC m=+146.278374729" watchObservedRunningTime="2026-01-28 15:19:54.391864084 +0000 UTC m=+146.287702406" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.396027 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:54 crc kubenswrapper[4871]: E0128 15:19:54.403231 4871 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:54.903190145 +0000 UTC m=+146.799028467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.416649 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xpk8s" event={"ID":"ca237fb9-41ce-45a3-bc60-7439787888da","Type":"ContainerStarted","Data":"0f78cfb98c98ed7b1daa07ff7e1a501ef38db0291a7babd3b6148bbb8ae953eb"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.416691 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xpk8s" event={"ID":"ca237fb9-41ce-45a3-bc60-7439787888da","Type":"ContainerStarted","Data":"3b0fc1e306cb2ab32295a99227adb7532747c9a4c95d03c832d4a7990439abe3"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.420855 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv" event={"ID":"9780dff9-f003-484a-af82-94fd7cd97a32","Type":"ContainerStarted","Data":"2677a57784e9e500227a6f655b44cd604ca143294fdd120210b26dd3701e66fc"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.502870 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.503150 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mq867" event={"ID":"6c77ad76-1c21-4ce4-ad9d-88f90aa9b133","Type":"ContainerStarted","Data":"eb69eb2c3bcda6bcd035aaf6b2e6a4174697c801b8bd2d4b793e41b2cc9e665a"} Jan 28 15:19:54 crc kubenswrapper[4871]: E0128 15:19:54.503486 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:55.003469659 +0000 UTC m=+146.899307981 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.503720 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:54 crc kubenswrapper[4871]: E0128 15:19:54.505349 4871 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:55.005339788 +0000 UTC m=+146.901178110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.512250 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5" event={"ID":"514e10d9-70a0-4a80-bd56-349522fad444","Type":"ContainerStarted","Data":"8f6c6eef6085b5963590dc45b5e663139ac3d42d5ea6692712deeba3e10f1ec7"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.512802 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.520254 4871 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-mzvt5 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.520316 4871 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5" podUID="514e10d9-70a0-4a80-bd56-349522fad444" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: 
connect: connection refused" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.523026 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d2f7r" event={"ID":"91bad959-dee9-48f7-90c6-7d7462b8cf9f","Type":"ContainerStarted","Data":"9499a2a88fad8ce1bf6d04de48a145e0b4cfd509760d0004bcb6b7fea9f9a295"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.534941 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xv6dw" podStartSLOduration=125.534921751 podStartE2EDuration="2m5.534921751s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:54.43254495 +0000 UTC m=+146.328383282" watchObservedRunningTime="2026-01-28 15:19:54.534921751 +0000 UTC m=+146.430760073" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.543231 4871 generic.go:334] "Generic (PLEG): container finished" podID="5409ad1c-96fd-4c09-aeed-7a9722b0abc1" containerID="1ff420fc160748c022e64e20192f10ede45f0fc425b76cb46135361b72ced1aa" exitCode=0 Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.543341 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" event={"ID":"5409ad1c-96fd-4c09-aeed-7a9722b0abc1","Type":"ContainerDied","Data":"1ff420fc160748c022e64e20192f10ede45f0fc425b76cb46135361b72ced1aa"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.557506 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqczj" event={"ID":"e058d636-1ce7-480a-a798-378083e7edf9","Type":"ContainerStarted","Data":"5215b5f55b8496aef3a86e89f46d05855bbf73317b8bdc5c5bef523b9de8dffe"} Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.560003 4871 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-j85fr" podStartSLOduration=126.559981829 podStartE2EDuration="2m6.559981829s" podCreationTimestamp="2026-01-28 15:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:54.534438745 +0000 UTC m=+146.430277067" watchObservedRunningTime="2026-01-28 15:19:54.559981829 +0000 UTC m=+146.455820151" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.611334 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d2f7r" podStartSLOduration=125.611315423 podStartE2EDuration="2m5.611315423s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:54.560805914 +0000 UTC m=+146.456644236" watchObservedRunningTime="2026-01-28 15:19:54.611315423 +0000 UTC m=+146.507153745" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.612209 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:54 crc kubenswrapper[4871]: E0128 15:19:54.612497 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:55.112483151 +0000 UTC m=+147.008321473 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.612975 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xpk8s" podStartSLOduration=125.612968306 podStartE2EDuration="2m5.612968306s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:54.611705716 +0000 UTC m=+146.507544038" watchObservedRunningTime="2026-01-28 15:19:54.612968306 +0000 UTC m=+146.508806628" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.645404 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv" podStartSLOduration=125.645388148 podStartE2EDuration="2m5.645388148s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:54.644411558 +0000 UTC m=+146.540249880" watchObservedRunningTime="2026-01-28 15:19:54.645388148 +0000 UTC m=+146.541226470" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.677788 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5" podStartSLOduration=125.67777476 podStartE2EDuration="2m5.67777476s" podCreationTimestamp="2026-01-28 
15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:54.676988605 +0000 UTC m=+146.572826927" watchObservedRunningTime="2026-01-28 15:19:54.67777476 +0000 UTC m=+146.573613072" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.714122 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:54 crc kubenswrapper[4871]: E0128 15:19:54.717948 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:55.217933949 +0000 UTC m=+147.113772271 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.778681 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqczj" podStartSLOduration=125.778664764 podStartE2EDuration="2m5.778664764s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:54.778175728 +0000 UTC m=+146.674014050" watchObservedRunningTime="2026-01-28 15:19:54.778664764 +0000 UTC m=+146.674503086" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.819472 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:54 crc kubenswrapper[4871]: E0128 15:19:54.819909 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:55.319881706 +0000 UTC m=+147.215720028 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.850924 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:54 crc kubenswrapper[4871]: I0128 15:19:54.923441 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:54 crc kubenswrapper[4871]: E0128 15:19:54.923853 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:55.423841427 +0000 UTC m=+147.319679749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.025925 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:55 crc kubenswrapper[4871]: E0128 15:19:55.026450 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:55.526434914 +0000 UTC m=+147.422273236 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.100341 4871 patch_prober.go:28] interesting pod/router-default-5444994796-9q8n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:19:55 crc kubenswrapper[4871]: [-]has-synced failed: reason withheld Jan 28 15:19:55 crc kubenswrapper[4871]: [+]process-running ok Jan 28 15:19:55 crc kubenswrapper[4871]: healthz check failed Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.100403 4871 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9q8n5" podUID="d48b65d8-de38-4d1f-9162-3f16e3b8401b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.129246 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:55 crc kubenswrapper[4871]: E0128 15:19:55.129600 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 15:19:55.629575309 +0000 UTC m=+147.525413631 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.230159 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:55 crc kubenswrapper[4871]: E0128 15:19:55.230956 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:55.730940398 +0000 UTC m=+147.626778720 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.331710 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:55 crc kubenswrapper[4871]: E0128 15:19:55.332001 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:55.831990166 +0000 UTC m=+147.727828478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.433292 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:55 crc kubenswrapper[4871]: E0128 15:19:55.433717 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:55.933701516 +0000 UTC m=+147.829539838 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.534801 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:55 crc kubenswrapper[4871]: E0128 15:19:55.535190 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:56.035170107 +0000 UTC m=+147.931008459 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.569333 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qxtnb" event={"ID":"d5e4044a-73ca-49f4-bd2a-835d862ad993","Type":"ContainerStarted","Data":"bc98e1b7324612e82d8c99a9ecbfef5df5436a518616ed54b43dd6ef91f4d5bf"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.569379 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qxtnb" event={"ID":"d5e4044a-73ca-49f4-bd2a-835d862ad993","Type":"ContainerStarted","Data":"53a9df9ced47a049429360a0d6178d1182fa8ba2554b9cadae028302f9e4b15c"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.570246 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qxtnb" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.573912 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v" event={"ID":"7128a8bf-85ad-45ed-bb6f-9efc037aca2b","Type":"ContainerStarted","Data":"a8201ea144c96101ac145995838c75371f9931628080bb3e117bbb0b6a568c2e"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.574367 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.576731 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cc8lr" 
event={"ID":"00278fd9-55b0-4157-aec3-a06cc2b248fd","Type":"ContainerStarted","Data":"de14fdc99fa8656adaa4741cfa670f8c184a53d5bf51504107f093829b41e2b3"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.578670 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" event={"ID":"2aa3d7cc-57c4-420c-bb92-e7fc4525a763","Type":"ContainerStarted","Data":"92b0c7bed4c3ff927ec53a56851fb7ddab37d7d69269ca428e40f7c5c8fa2d4e"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.579275 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.582812 4871 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-dzwqq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" start-of-body= Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.582873 4871 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" podUID="2aa3d7cc-57c4-420c-bb92-e7fc4525a763" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.585031 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zdfpg" event={"ID":"3dcb49be-1798-4698-9d3c-39bf78d992e6","Type":"ContainerStarted","Data":"76b30c6e7fad58c33c142333df81b69552acae221d855073e0b20a3f7c9a6eee"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.585876 4871 patch_prober.go:28] interesting pod/downloads-7954f5f757-zdfpg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": 
dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.585934 4871 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zdfpg" podUID="3dcb49be-1798-4698-9d3c-39bf78d992e6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.587105 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4rlg" event={"ID":"12fb6d93-ae53-42df-933b-7fe134353202","Type":"ContainerStarted","Data":"698dc3bf83622aa20d7c3571a37730bda0aacdc2cc75eec4d26a55d894f17341"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.588598 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j85fr" event={"ID":"2571452b-5b45-43d1-bd39-35ef29c4fe80","Type":"ContainerStarted","Data":"671c6dcd0413a2882adf49dddda82507d1829f3b1472b50446f3711bb49c15e6"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.589969 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps" event={"ID":"af48622d-dc2b-4a71-8d73-c16573492222","Type":"ContainerStarted","Data":"00e3a1d4ac07a1315c5ec04920afc8d243096dca4236de386bb3e1bf4039f2fc"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.590510 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.591447 4871 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-wgnps container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" 
start-of-body= Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.591477 4871 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps" podUID="af48622d-dc2b-4a71-8d73-c16573492222" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.592462 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6" event={"ID":"09d59f6a-1c35-407f-b590-ad87ba92da70","Type":"ContainerStarted","Data":"b29f5dd3927e300935231584e911ca5219f401f55418671c4519f19b39048de0"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.594221 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mq867" event={"ID":"6c77ad76-1c21-4ce4-ad9d-88f90aa9b133","Type":"ContainerStarted","Data":"e35baf29077b1c09bcb67591c4b162f0947a3d88fcb4803f7678a6e2eadc0adb"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.594244 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mq867" event={"ID":"6c77ad76-1c21-4ce4-ad9d-88f90aa9b133","Type":"ContainerStarted","Data":"204a037b6b7b06a6ad65f9eaddd4242858577a95f4d2328d5123266644cc11c0"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.596985 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-qxtnb" podStartSLOduration=8.596962675 podStartE2EDuration="8.596962675s" podCreationTimestamp="2026-01-28 15:19:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:55.595455587 +0000 UTC m=+147.491293909" watchObservedRunningTime="2026-01-28 15:19:55.596962675 +0000 UTC m=+147.492800997" Jan 
28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.597279 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" event={"ID":"5409ad1c-96fd-4c09-aeed-7a9722b0abc1","Type":"ContainerStarted","Data":"3e48ee3129934bc7d70e3436ca5ba06773c9657967b77d618621f6656a283af7"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.597319 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" event={"ID":"5409ad1c-96fd-4c09-aeed-7a9722b0abc1","Type":"ContainerStarted","Data":"ef48d82848608ec9d4b553bf408364b809df87a21e8dde75dd331451aeb5eed8"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.599348 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j" event={"ID":"e82eb460-ef08-480a-93a3-7ed6a33ea0ac","Type":"ContainerStarted","Data":"ca886f4d89eeee2e354c426e8d8a4e03d5642427a9fee84e9425641590db7ade"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.600152 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.601237 4871 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6cz8j container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.601282 4871 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j" podUID="e82eb460-ef08-480a-93a3-7ed6a33ea0ac" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 
15:19:55.604302 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8xpb5" event={"ID":"e7d69d2d-a07f-4fd5-a9d0-964e22792d42","Type":"ContainerStarted","Data":"9ca65e1e9ae9dfb3d75655d111f2d517f8a8fad9666eec826ac60235c5afcf3d"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.604400 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8xpb5" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.605517 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5" event={"ID":"514e10d9-70a0-4a80-bd56-349522fad444","Type":"ContainerStarted","Data":"dde9d7a97872e861406cca4fb59d88f90159b69adfc39e4536075ec430cf0b8c"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.609546 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5" event={"ID":"ad449099-f3be-4711-8c75-a8fab2eabda3","Type":"ContainerStarted","Data":"79cc46a1eca7a4e33f4dd296c5858b950b28cd2fd1673f2a40414cd660aed505"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.610806 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-zthwv" event={"ID":"9780dff9-f003-484a-af82-94fd7cd97a32","Type":"ContainerStarted","Data":"026672bdcdf825b8211ee79d09de394f8f07d5c86d690733f6c4d7cfe253772c"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.612733 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" event={"ID":"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2","Type":"ContainerStarted","Data":"4b1d9bee212c1419643729d41ca73009de4274f50a743f91a15bb4c37a2280a3"} Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.615204 4871 csr.go:261] certificate signing request 
csr-bc25r is approved, waiting to be issued Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.622122 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jtc4s" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.623395 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzvt5" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.626937 4871 csr.go:257] certificate signing request csr-bc25r is issued Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.628715 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v" podStartSLOduration=127.628693916 podStartE2EDuration="2m7.628693916s" podCreationTimestamp="2026-01-28 15:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:55.625967239 +0000 UTC m=+147.521805561" watchObservedRunningTime="2026-01-28 15:19:55.628693916 +0000 UTC m=+147.524532238" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.636212 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:55 crc kubenswrapper[4871]: E0128 15:19:55.636447 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:56.136416141 +0000 UTC m=+148.032254473 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.636800 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:55 crc kubenswrapper[4871]: E0128 15:19:55.637137 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:56.137124824 +0000 UTC m=+148.032963146 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.653531 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-cc8lr" podStartSLOduration=8.653512426 podStartE2EDuration="8.653512426s" podCreationTimestamp="2026-01-28 15:19:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:55.651607836 +0000 UTC m=+147.547446168" watchObservedRunningTime="2026-01-28 15:19:55.653512426 +0000 UTC m=+147.549350748" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.721972 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" podStartSLOduration=126.721955846 podStartE2EDuration="2m6.721955846s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:55.683201261 +0000 UTC m=+147.579039593" watchObservedRunningTime="2026-01-28 15:19:55.721955846 +0000 UTC m=+147.617794168" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.722306 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps" podStartSLOduration=126.722302597 podStartE2EDuration="2m6.722302597s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:55.721230353 +0000 UTC m=+147.617068675" watchObservedRunningTime="2026-01-28 15:19:55.722302597 +0000 UTC m=+147.618140919" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.740182 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:55 crc kubenswrapper[4871]: E0128 15:19:55.740402 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:56.240370232 +0000 UTC m=+148.136208554 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.741119 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:55 crc kubenswrapper[4871]: E0128 15:19:55.749146 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:56.249129271 +0000 UTC m=+148.144967593 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.761950 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mq867" podStartSLOduration=126.761929909 podStartE2EDuration="2m6.761929909s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:55.760477743 +0000 UTC m=+147.656316065" watchObservedRunningTime="2026-01-28 15:19:55.761929909 +0000 UTC m=+147.657768241" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.790309 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4rlg" podStartSLOduration=126.790293602 podStartE2EDuration="2m6.790293602s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:55.788565367 +0000 UTC m=+147.684403689" watchObservedRunningTime="2026-01-28 15:19:55.790293602 +0000 UTC m=+147.686131924" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.848382 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2j6j6" podStartSLOduration=126.848362391 podStartE2EDuration="2m6.848362391s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:55.84201573 +0000 UTC m=+147.737854052" watchObservedRunningTime="2026-01-28 15:19:55.848362391 +0000 UTC m=+147.744200713" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.853378 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:55 crc kubenswrapper[4871]: E0128 15:19:55.855507 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:56.355468568 +0000 UTC m=+148.251306900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.917773 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j" podStartSLOduration=126.917754982 podStartE2EDuration="2m6.917754982s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:55.886641311 +0000 UTC m=+147.782479633" watchObservedRunningTime="2026-01-28 15:19:55.917754982 +0000 UTC m=+147.813593304" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.950686 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-tglzn" podStartSLOduration=126.9506665 podStartE2EDuration="2m6.9506665s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:55.946149187 +0000 UTC m=+147.841987509" watchObservedRunningTime="2026-01-28 15:19:55.9506665 +0000 UTC m=+147.846504822" Jan 28 15:19:55 crc kubenswrapper[4871]: I0128 15:19:55.954570 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: 
\"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:55 crc kubenswrapper[4871]: E0128 15:19:55.954877 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:56.454864803 +0000 UTC m=+148.350703125 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.038760 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8xpb5" podStartSLOduration=127.038746225 podStartE2EDuration="2m7.038746225s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:56.03825294 +0000 UTC m=+147.934091262" watchObservedRunningTime="2026-01-28 15:19:56.038746225 +0000 UTC m=+147.934584547" Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.039843 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" podStartSLOduration=127.03983846 podStartE2EDuration="2m7.03983846s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:56.019343187 
+0000 UTC m=+147.915181509" watchObservedRunningTime="2026-01-28 15:19:56.03983846 +0000 UTC m=+147.935676782" Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.055367 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:56 crc kubenswrapper[4871]: E0128 15:19:56.055571 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:56.555536 +0000 UTC m=+148.451374322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.055775 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:56 crc kubenswrapper[4871]: E0128 15:19:56.056123 4871 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:56.556116308 +0000 UTC m=+148.451954630 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.103611 4871 patch_prober.go:28] interesting pod/router-default-5444994796-9q8n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:19:56 crc kubenswrapper[4871]: [-]has-synced failed: reason withheld Jan 28 15:19:56 crc kubenswrapper[4871]: [+]process-running ok Jan 28 15:19:56 crc kubenswrapper[4871]: healthz check failed Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.103681 4871 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9q8n5" podUID="d48b65d8-de38-4d1f-9162-3f16e3b8401b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.156341 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:56 crc kubenswrapper[4871]: E0128 15:19:56.156633 4871 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:56.656618749 +0000 UTC m=+148.552457071 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.257314 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.257529 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:56 crc kubenswrapper[4871]: E0128 15:19:56.257563 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 15:19:56.757552064 +0000 UTC m=+148.653390386 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.257610 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.257745 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.257775 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.258703 4871 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.263028 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.263231 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.277684 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.358151 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:56 crc kubenswrapper[4871]: E0128 15:19:56.358446 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:56.858401396 +0000 UTC m=+148.754239718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.424198 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.430887 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.459808 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:56 crc kubenswrapper[4871]: E0128 15:19:56.460322 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:56.960303851 +0000 UTC m=+148.856142173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.522053 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.562832 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:56 crc kubenswrapper[4871]: E0128 15:19:56.563395 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:57.063362044 +0000 UTC m=+148.959200366 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.563484 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:56 crc kubenswrapper[4871]: E0128 15:19:56.563957 4871 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:57.063947462 +0000 UTC m=+148.959785784 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.633555 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-28 15:14:55 +0000 UTC, rotation deadline is 2026-11-11 22:12:56.789608069 +0000 UTC Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.633618 4871 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6894h53m0.155993818s for next certificate rotation Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.652993 4871 patch_prober.go:28] interesting pod/downloads-7954f5f757-zdfpg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.653116 4871 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zdfpg" podUID="3dcb49be-1798-4698-9d3c-39bf78d992e6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.663968 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:56 crc kubenswrapper[4871]: E0128 15:19:56.664892 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:57.164864186 +0000 UTC m=+149.060702518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.666316 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.667849 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wgnps" Jan 28 15:19:56 crc kubenswrapper[4871]: E0128 15:19:56.676906 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-28 15:19:57.17689401 +0000 UTC m=+149.072732332 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.773069 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:56 crc kubenswrapper[4871]: E0128 15:19:56.773439 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:57.273424884 +0000 UTC m=+149.169263206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.827983 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.876538 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:56 crc kubenswrapper[4871]: E0128 15:19:56.876965 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:57.376947171 +0000 UTC m=+149.272785553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:56 crc kubenswrapper[4871]: W0128 15:19:56.962121 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-23f9a56b61f033b9f5767fffd5fc73a6043cd4a0d5d82c14a2354e90b9bd1382 WatchSource:0}: Error finding container 23f9a56b61f033b9f5767fffd5fc73a6043cd4a0d5d82c14a2354e90b9bd1382: Status 404 returned error can't find the container with id 23f9a56b61f033b9f5767fffd5fc73a6043cd4a0d5d82c14a2354e90b9bd1382 Jan 28 15:19:56 crc kubenswrapper[4871]: I0128 15:19:56.977495 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:56 crc kubenswrapper[4871]: E0128 15:19:56.978390 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:57.478357461 +0000 UTC m=+149.374195783 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.081726 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:57 crc kubenswrapper[4871]: E0128 15:19:57.082268 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:57.58225089 +0000 UTC m=+149.478089202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.111856 4871 patch_prober.go:28] interesting pod/router-default-5444994796-9q8n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:19:57 crc kubenswrapper[4871]: [-]has-synced failed: reason withheld Jan 28 15:19:57 crc kubenswrapper[4871]: [+]process-running ok Jan 28 15:19:57 crc kubenswrapper[4871]: healthz check failed Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.111937 4871 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9q8n5" podUID="d48b65d8-de38-4d1f-9162-3f16e3b8401b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.184328 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:57 crc kubenswrapper[4871]: E0128 15:19:57.184676 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 15:19:57.684660991 +0000 UTC m=+149.580499313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.286454 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:57 crc kubenswrapper[4871]: E0128 15:19:57.290263 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:57.790231793 +0000 UTC m=+149.686070115 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.335237 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cz8j" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.389786 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:57 crc kubenswrapper[4871]: E0128 15:19:57.391123 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:57.891092465 +0000 UTC m=+149.786930777 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.437849 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g7btj"] Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.439059 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g7btj"] Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.439259 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g7btj" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.445691 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.491313 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:57 crc kubenswrapper[4871]: E0128 15:19:57.491994 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:57.991981169 +0000 UTC m=+149.887819481 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.492803 4871 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.595979 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.596174 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b83a5ec-9ed6-4e66-9a39-610a39f64d19-utilities\") pod \"community-operators-g7btj\" (UID: \"9b83a5ec-9ed6-4e66-9a39-610a39f64d19\") " pod="openshift-marketplace/community-operators-g7btj" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.596203 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfvtx\" (UniqueName: \"kubernetes.io/projected/9b83a5ec-9ed6-4e66-9a39-610a39f64d19-kube-api-access-tfvtx\") pod \"community-operators-g7btj\" (UID: \"9b83a5ec-9ed6-4e66-9a39-610a39f64d19\") " pod="openshift-marketplace/community-operators-g7btj" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.596246 4871 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b83a5ec-9ed6-4e66-9a39-610a39f64d19-catalog-content\") pod \"community-operators-g7btj\" (UID: \"9b83a5ec-9ed6-4e66-9a39-610a39f64d19\") " pod="openshift-marketplace/community-operators-g7btj" Jan 28 15:19:57 crc kubenswrapper[4871]: E0128 15:19:57.596335 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:58.096321112 +0000 UTC m=+149.992159434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.599291 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mjks9"] Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.600321 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mjks9" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.604906 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.617777 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mjks9"] Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.673049 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"435a7c4d8d8195195a9487af554f79f350203b9eaf32fdf8c003320a4f4545c8"} Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.673108 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"23f9a56b61f033b9f5767fffd5fc73a6043cd4a0d5d82c14a2354e90b9bd1382"} Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.674065 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.692046 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d08e8303b150cf8fdbafeb8f495ed3e61d26cdf5da50863bd42b7be4f8ad301a"} Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.692096 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"694397207a0e6b2ae672ff2a2e73d154d9815853972b6ebb04961a33782ad52b"} Jan 28 
15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.695502 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" event={"ID":"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2","Type":"ContainerStarted","Data":"020cf3de75ad455d34cba233f85dc1d6af3a5ad24fb39b8d89c4e85cb978760f"} Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.695541 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" event={"ID":"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2","Type":"ContainerStarted","Data":"5c7d6fe89c4a47a45044681112f75871185ecc8f32408a482adcc902a3ff511a"} Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.698497 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b83a5ec-9ed6-4e66-9a39-610a39f64d19-utilities\") pod \"community-operators-g7btj\" (UID: \"9b83a5ec-9ed6-4e66-9a39-610a39f64d19\") " pod="openshift-marketplace/community-operators-g7btj" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.698548 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfvtx\" (UniqueName: \"kubernetes.io/projected/9b83a5ec-9ed6-4e66-9a39-610a39f64d19-kube-api-access-tfvtx\") pod \"community-operators-g7btj\" (UID: \"9b83a5ec-9ed6-4e66-9a39-610a39f64d19\") " pod="openshift-marketplace/community-operators-g7btj" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.698598 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e209b4f2-aaea-496b-8e14-58f2aa8faaa5-utilities\") pod \"certified-operators-mjks9\" (UID: \"e209b4f2-aaea-496b-8e14-58f2aa8faaa5\") " pod="openshift-marketplace/certified-operators-mjks9" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.698655 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.698695 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b83a5ec-9ed6-4e66-9a39-610a39f64d19-catalog-content\") pod \"community-operators-g7btj\" (UID: \"9b83a5ec-9ed6-4e66-9a39-610a39f64d19\") " pod="openshift-marketplace/community-operators-g7btj" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.698745 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e209b4f2-aaea-496b-8e14-58f2aa8faaa5-catalog-content\") pod \"certified-operators-mjks9\" (UID: \"e209b4f2-aaea-496b-8e14-58f2aa8faaa5\") " pod="openshift-marketplace/certified-operators-mjks9" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.698774 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qsb8\" (UniqueName: \"kubernetes.io/projected/e209b4f2-aaea-496b-8e14-58f2aa8faaa5-kube-api-access-4qsb8\") pod \"certified-operators-mjks9\" (UID: \"e209b4f2-aaea-496b-8e14-58f2aa8faaa5\") " pod="openshift-marketplace/certified-operators-mjks9" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.699267 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b83a5ec-9ed6-4e66-9a39-610a39f64d19-utilities\") pod \"community-operators-g7btj\" (UID: \"9b83a5ec-9ed6-4e66-9a39-610a39f64d19\") " pod="openshift-marketplace/community-operators-g7btj" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.701070 4871 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b83a5ec-9ed6-4e66-9a39-610a39f64d19-catalog-content\") pod \"community-operators-g7btj\" (UID: \"9b83a5ec-9ed6-4e66-9a39-610a39f64d19\") " pod="openshift-marketplace/community-operators-g7btj" Jan 28 15:19:57 crc kubenswrapper[4871]: E0128 15:19:57.702655 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:58.202639438 +0000 UTC m=+150.098477760 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.706876 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f68259e4703bca3a737f805da46faa75b7639edafec4766d1b12c93ce45625ce"} Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.706918 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e5e6be529d59b740357ddd6ad5f63c5e07d50cfadc55616a45821241a239b0e2"} Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.728874 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfvtx\" 
(UniqueName: \"kubernetes.io/projected/9b83a5ec-9ed6-4e66-9a39-610a39f64d19-kube-api-access-tfvtx\") pod \"community-operators-g7btj\" (UID: \"9b83a5ec-9ed6-4e66-9a39-610a39f64d19\") " pod="openshift-marketplace/community-operators-g7btj" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.742017 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vkb2v" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.789659 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g7btj" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.802773 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:57 crc kubenswrapper[4871]: E0128 15:19:57.802984 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:58.302957313 +0000 UTC m=+150.198795635 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.803058 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e209b4f2-aaea-496b-8e14-58f2aa8faaa5-catalog-content\") pod \"certified-operators-mjks9\" (UID: \"e209b4f2-aaea-496b-8e14-58f2aa8faaa5\") " pod="openshift-marketplace/certified-operators-mjks9" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.803114 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qsb8\" (UniqueName: \"kubernetes.io/projected/e209b4f2-aaea-496b-8e14-58f2aa8faaa5-kube-api-access-4qsb8\") pod \"certified-operators-mjks9\" (UID: \"e209b4f2-aaea-496b-8e14-58f2aa8faaa5\") " pod="openshift-marketplace/certified-operators-mjks9" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.803265 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e209b4f2-aaea-496b-8e14-58f2aa8faaa5-utilities\") pod \"certified-operators-mjks9\" (UID: \"e209b4f2-aaea-496b-8e14-58f2aa8faaa5\") " pod="openshift-marketplace/certified-operators-mjks9" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.803403 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: 
\"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.804315 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e209b4f2-aaea-496b-8e14-58f2aa8faaa5-catalog-content\") pod \"certified-operators-mjks9\" (UID: \"e209b4f2-aaea-496b-8e14-58f2aa8faaa5\") " pod="openshift-marketplace/certified-operators-mjks9" Jan 28 15:19:57 crc kubenswrapper[4871]: E0128 15:19:57.804734 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:58.30471912 +0000 UTC m=+150.200557442 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.804891 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e209b4f2-aaea-496b-8e14-58f2aa8faaa5-utilities\") pod \"certified-operators-mjks9\" (UID: \"e209b4f2-aaea-496b-8e14-58f2aa8faaa5\") " pod="openshift-marketplace/certified-operators-mjks9" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.813977 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pxfdt"] Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.839654 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4qsb8\" (UniqueName: \"kubernetes.io/projected/e209b4f2-aaea-496b-8e14-58f2aa8faaa5-kube-api-access-4qsb8\") pod \"certified-operators-mjks9\" (UID: \"e209b4f2-aaea-496b-8e14-58f2aa8faaa5\") " pod="openshift-marketplace/certified-operators-mjks9" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.847605 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxfdt"] Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.847721 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pxfdt" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.903954 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.904251 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67d8b5ae-67a3-4ca7-b74e-35e5cc45d519-utilities\") pod \"community-operators-pxfdt\" (UID: \"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519\") " pod="openshift-marketplace/community-operators-pxfdt" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.904321 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67d8b5ae-67a3-4ca7-b74e-35e5cc45d519-catalog-content\") pod \"community-operators-pxfdt\" (UID: \"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519\") " pod="openshift-marketplace/community-operators-pxfdt" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.904365 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-28fjt\" (UniqueName: \"kubernetes.io/projected/67d8b5ae-67a3-4ca7-b74e-35e5cc45d519-kube-api-access-28fjt\") pod \"community-operators-pxfdt\" (UID: \"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519\") " pod="openshift-marketplace/community-operators-pxfdt" Jan 28 15:19:57 crc kubenswrapper[4871]: E0128 15:19:57.904502 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:58.404482817 +0000 UTC m=+150.300321139 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.917804 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mjks9" Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.985001 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ldxz9"] Jan 28 15:19:57 crc kubenswrapper[4871]: I0128 15:19:57.987103 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ldxz9" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.001466 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ldxz9"] Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.005100 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67d8b5ae-67a3-4ca7-b74e-35e5cc45d519-catalog-content\") pod \"community-operators-pxfdt\" (UID: \"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519\") " pod="openshift-marketplace/community-operators-pxfdt" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.005188 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.005234 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7296424-ed41-429c-8e45-599795442f1d-catalog-content\") pod \"certified-operators-ldxz9\" (UID: \"f7296424-ed41-429c-8e45-599795442f1d\") " pod="openshift-marketplace/certified-operators-ldxz9" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.005256 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28fjt\" (UniqueName: \"kubernetes.io/projected/67d8b5ae-67a3-4ca7-b74e-35e5cc45d519-kube-api-access-28fjt\") pod \"community-operators-pxfdt\" (UID: \"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519\") " pod="openshift-marketplace/community-operators-pxfdt" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.005291 4871 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkdlw\" (UniqueName: \"kubernetes.io/projected/f7296424-ed41-429c-8e45-599795442f1d-kube-api-access-lkdlw\") pod \"certified-operators-ldxz9\" (UID: \"f7296424-ed41-429c-8e45-599795442f1d\") " pod="openshift-marketplace/certified-operators-ldxz9" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.005333 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7296424-ed41-429c-8e45-599795442f1d-utilities\") pod \"certified-operators-ldxz9\" (UID: \"f7296424-ed41-429c-8e45-599795442f1d\") " pod="openshift-marketplace/certified-operators-ldxz9" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.005353 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67d8b5ae-67a3-4ca7-b74e-35e5cc45d519-utilities\") pod \"community-operators-pxfdt\" (UID: \"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519\") " pod="openshift-marketplace/community-operators-pxfdt" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.005719 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67d8b5ae-67a3-4ca7-b74e-35e5cc45d519-catalog-content\") pod \"community-operators-pxfdt\" (UID: \"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519\") " pod="openshift-marketplace/community-operators-pxfdt" Jan 28 15:19:58 crc kubenswrapper[4871]: E0128 15:19:58.005982 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:19:58.505969219 +0000 UTC m=+150.401807541 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vprhz" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.006192 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67d8b5ae-67a3-4ca7-b74e-35e5cc45d519-utilities\") pod \"community-operators-pxfdt\" (UID: \"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519\") " pod="openshift-marketplace/community-operators-pxfdt" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.040501 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28fjt\" (UniqueName: \"kubernetes.io/projected/67d8b5ae-67a3-4ca7-b74e-35e5cc45d519-kube-api-access-28fjt\") pod \"community-operators-pxfdt\" (UID: \"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519\") " pod="openshift-marketplace/community-operators-pxfdt" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.063303 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.064537 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.071016 4871 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-28T15:19:57.492831156Z","Handler":null,"Name":""} Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.080053 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.081429 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.106443 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.107118 4871 patch_prober.go:28] interesting pod/router-default-5444994796-9q8n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:19:58 crc kubenswrapper[4871]: [-]has-synced failed: reason withheld Jan 28 15:19:58 crc kubenswrapper[4871]: [+]process-running ok Jan 28 15:19:58 crc kubenswrapper[4871]: healthz check failed Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.107199 4871 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9q8n5" podUID="d48b65d8-de38-4d1f-9162-3f16e3b8401b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:19:58 crc 
kubenswrapper[4871]: I0128 15:19:58.107209 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7296424-ed41-429c-8e45-599795442f1d-utilities\") pod \"certified-operators-ldxz9\" (UID: \"f7296424-ed41-429c-8e45-599795442f1d\") " pod="openshift-marketplace/certified-operators-ldxz9" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.107318 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7296424-ed41-429c-8e45-599795442f1d-catalog-content\") pod \"certified-operators-ldxz9\" (UID: \"f7296424-ed41-429c-8e45-599795442f1d\") " pod="openshift-marketplace/certified-operators-ldxz9" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.107342 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/add5ed23-8690-460e-ad71-166ae220ce1d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"add5ed23-8690-460e-ad71-166ae220ce1d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.107386 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkdlw\" (UniqueName: \"kubernetes.io/projected/f7296424-ed41-429c-8e45-599795442f1d-kube-api-access-lkdlw\") pod \"certified-operators-ldxz9\" (UID: \"f7296424-ed41-429c-8e45-599795442f1d\") " pod="openshift-marketplace/certified-operators-ldxz9" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.107413 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/add5ed23-8690-460e-ad71-166ae220ce1d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"add5ed23-8690-460e-ad71-166ae220ce1d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 
15:19:58 crc kubenswrapper[4871]: E0128 15:19:58.107659 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:58.607637816 +0000 UTC m=+150.503476128 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.107921 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7296424-ed41-429c-8e45-599795442f1d-utilities\") pod \"certified-operators-ldxz9\" (UID: \"f7296424-ed41-429c-8e45-599795442f1d\") " pod="openshift-marketplace/certified-operators-ldxz9" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.108023 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7296424-ed41-429c-8e45-599795442f1d-catalog-content\") pod \"certified-operators-ldxz9\" (UID: \"f7296424-ed41-429c-8e45-599795442f1d\") " pod="openshift-marketplace/certified-operators-ldxz9" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.131155 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.132555 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkdlw\" (UniqueName: 
\"kubernetes.io/projected/f7296424-ed41-429c-8e45-599795442f1d-kube-api-access-lkdlw\") pod \"certified-operators-ldxz9\" (UID: \"f7296424-ed41-429c-8e45-599795442f1d\") " pod="openshift-marketplace/certified-operators-ldxz9" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.148267 4871 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.148305 4871 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.165682 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.203978 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pxfdt" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.209727 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/add5ed23-8690-460e-ad71-166ae220ce1d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"add5ed23-8690-460e-ad71-166ae220ce1d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.209898 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.209927 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/add5ed23-8690-460e-ad71-166ae220ce1d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"add5ed23-8690-460e-ad71-166ae220ce1d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.211187 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/add5ed23-8690-460e-ad71-166ae220ce1d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"add5ed23-8690-460e-ad71-166ae220ce1d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.214853 4871 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.215077 4871 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.248794 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/add5ed23-8690-460e-ad71-166ae220ce1d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"add5ed23-8690-460e-ad71-166ae220ce1d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.298985 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vprhz\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.316924 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.317359 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ldxz9" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.342608 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.366329 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mjks9"] Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.395885 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.427383 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g7btj"] Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.503386 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.516126 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxfdt"] Jan 28 15:19:58 crc kubenswrapper[4871]: W0128 15:19:58.551775 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67d8b5ae_67a3_4ca7_b74e_35e5cc45d519.slice/crio-7d47860d750dc6ebf0328f70cd54381dfbab107c657b6bb131991ff6ba0a0928 WatchSource:0}: Error finding container 7d47860d750dc6ebf0328f70cd54381dfbab107c657b6bb131991ff6ba0a0928: Status 404 returned error can't find the container with id 7d47860d750dc6ebf0328f70cd54381dfbab107c657b6bb131991ff6ba0a0928 Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.631144 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ldxz9"] Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.724650 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxfdt" event={"ID":"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519","Type":"ContainerStarted","Data":"7d47860d750dc6ebf0328f70cd54381dfbab107c657b6bb131991ff6ba0a0928"} Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.727984 4871 generic.go:334] "Generic (PLEG): container finished" podID="e209b4f2-aaea-496b-8e14-58f2aa8faaa5" containerID="eef1f938017d3cac965244bb04470f3fb5e2808053a7c55e66d9f563830bff8f" exitCode=0 Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.728058 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjks9" event={"ID":"e209b4f2-aaea-496b-8e14-58f2aa8faaa5","Type":"ContainerDied","Data":"eef1f938017d3cac965244bb04470f3fb5e2808053a7c55e66d9f563830bff8f"} Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.728109 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-mjks9" event={"ID":"e209b4f2-aaea-496b-8e14-58f2aa8faaa5","Type":"ContainerStarted","Data":"c41f98c5d9a96e2501dc17456036876b945f30296717fe2a294c5c87d4c76735"} Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.732524 4871 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.747467 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldxz9" event={"ID":"f7296424-ed41-429c-8e45-599795442f1d","Type":"ContainerStarted","Data":"72477bb9774747e694692621ecbeabc2d01332c7256d4f633de28d66a28c4bb6"} Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.750075 4871 generic.go:334] "Generic (PLEG): container finished" podID="9b83a5ec-9ed6-4e66-9a39-610a39f64d19" containerID="a5ee91f3edce66441308abbfcfa3536bf1f8ddfc969956dacd322133350d6e52" exitCode=0 Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.750171 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7btj" event={"ID":"9b83a5ec-9ed6-4e66-9a39-610a39f64d19","Type":"ContainerDied","Data":"a5ee91f3edce66441308abbfcfa3536bf1f8ddfc969956dacd322133350d6e52"} Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.750203 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7btj" event={"ID":"9b83a5ec-9ed6-4e66-9a39-610a39f64d19","Type":"ContainerStarted","Data":"5926a38b3856a353d17e05aa3104096bd6083f0c25411f14dc5494d2a3f9a5c1"} Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.769571 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" event={"ID":"8bf07a92-00a1-4c1c-ab4d-d40286c2d7a2","Type":"ContainerStarted","Data":"cdd194bffee396320c26ad2571b44c7ec42ebca8ed36746dacb8d366dc32bc49"} Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.783233 4871 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.817843 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-fvzpr" podStartSLOduration=11.817817885 podStartE2EDuration="11.817817885s" podCreationTimestamp="2026-01-28 15:19:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:58.806328419 +0000 UTC m=+150.702166741" watchObservedRunningTime="2026-01-28 15:19:58.817817885 +0000 UTC m=+150.713656207" Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.862761 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vprhz"] Jan 28 15:19:58 crc kubenswrapper[4871]: W0128 15:19:58.877830 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee25a7d4_5043_48d7_91d1_68f2af96109a.slice/crio-17ddab776518c0d7f89a99f535010a783d9f356ea56b08117c1668e7ea6b37e9 WatchSource:0}: Error finding container 17ddab776518c0d7f89a99f535010a783d9f356ea56b08117c1668e7ea6b37e9: Status 404 returned error can't find the container with id 17ddab776518c0d7f89a99f535010a783d9f356ea56b08117c1668e7ea6b37e9 Jan 28 15:19:58 crc kubenswrapper[4871]: I0128 15:19:58.927015 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.005936 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.006643 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.012557 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.012683 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.033977 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.036346 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81a8ea88-b9fc-4ae8-8240-8b0320045b7e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"81a8ea88-b9fc-4ae8-8240-8b0320045b7e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.036473 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81a8ea88-b9fc-4ae8-8240-8b0320045b7e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"81a8ea88-b9fc-4ae8-8240-8b0320045b7e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.100094 4871 patch_prober.go:28] interesting pod/router-default-5444994796-9q8n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:19:59 crc kubenswrapper[4871]: [-]has-synced failed: reason withheld Jan 28 15:19:59 crc kubenswrapper[4871]: [+]process-running ok Jan 28 15:19:59 crc kubenswrapper[4871]: healthz check 
failed Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.100344 4871 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9q8n5" podUID="d48b65d8-de38-4d1f-9162-3f16e3b8401b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.140112 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81a8ea88-b9fc-4ae8-8240-8b0320045b7e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"81a8ea88-b9fc-4ae8-8240-8b0320045b7e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.140665 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81a8ea88-b9fc-4ae8-8240-8b0320045b7e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"81a8ea88-b9fc-4ae8-8240-8b0320045b7e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.140318 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81a8ea88-b9fc-4ae8-8240-8b0320045b7e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"81a8ea88-b9fc-4ae8-8240-8b0320045b7e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.161706 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81a8ea88-b9fc-4ae8-8240-8b0320045b7e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"81a8ea88-b9fc-4ae8-8240-8b0320045b7e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.324549 4871 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.398072 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j9dhs"] Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.399427 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9dhs" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.401255 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.414538 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9dhs"] Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.446223 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6575ba24-dfe2-4f55-96ee-6692928debdd-utilities\") pod \"redhat-marketplace-j9dhs\" (UID: \"6575ba24-dfe2-4f55-96ee-6692928debdd\") " pod="openshift-marketplace/redhat-marketplace-j9dhs" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.446332 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm2hd\" (UniqueName: \"kubernetes.io/projected/6575ba24-dfe2-4f55-96ee-6692928debdd-kube-api-access-xm2hd\") pod \"redhat-marketplace-j9dhs\" (UID: \"6575ba24-dfe2-4f55-96ee-6692928debdd\") " pod="openshift-marketplace/redhat-marketplace-j9dhs" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.446376 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6575ba24-dfe2-4f55-96ee-6692928debdd-catalog-content\") pod \"redhat-marketplace-j9dhs\" (UID: 
\"6575ba24-dfe2-4f55-96ee-6692928debdd\") " pod="openshift-marketplace/redhat-marketplace-j9dhs" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.548306 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm2hd\" (UniqueName: \"kubernetes.io/projected/6575ba24-dfe2-4f55-96ee-6692928debdd-kube-api-access-xm2hd\") pod \"redhat-marketplace-j9dhs\" (UID: \"6575ba24-dfe2-4f55-96ee-6692928debdd\") " pod="openshift-marketplace/redhat-marketplace-j9dhs" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.548756 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6575ba24-dfe2-4f55-96ee-6692928debdd-catalog-content\") pod \"redhat-marketplace-j9dhs\" (UID: \"6575ba24-dfe2-4f55-96ee-6692928debdd\") " pod="openshift-marketplace/redhat-marketplace-j9dhs" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.548816 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6575ba24-dfe2-4f55-96ee-6692928debdd-utilities\") pod \"redhat-marketplace-j9dhs\" (UID: \"6575ba24-dfe2-4f55-96ee-6692928debdd\") " pod="openshift-marketplace/redhat-marketplace-j9dhs" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.549373 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6575ba24-dfe2-4f55-96ee-6692928debdd-catalog-content\") pod \"redhat-marketplace-j9dhs\" (UID: \"6575ba24-dfe2-4f55-96ee-6692928debdd\") " pod="openshift-marketplace/redhat-marketplace-j9dhs" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.549406 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6575ba24-dfe2-4f55-96ee-6692928debdd-utilities\") pod \"redhat-marketplace-j9dhs\" (UID: \"6575ba24-dfe2-4f55-96ee-6692928debdd\") " 
pod="openshift-marketplace/redhat-marketplace-j9dhs" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.587232 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm2hd\" (UniqueName: \"kubernetes.io/projected/6575ba24-dfe2-4f55-96ee-6692928debdd-kube-api-access-xm2hd\") pod \"redhat-marketplace-j9dhs\" (UID: \"6575ba24-dfe2-4f55-96ee-6692928debdd\") " pod="openshift-marketplace/redhat-marketplace-j9dhs" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.663162 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.718403 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9dhs" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.796579 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"add5ed23-8690-460e-ad71-166ae220ce1d","Type":"ContainerStarted","Data":"858a88d12ade42e72042019ba685c46e844c1d81a7e16a9962ce584e2cbe2ade"} Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.796636 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"add5ed23-8690-460e-ad71-166ae220ce1d","Type":"ContainerStarted","Data":"b48fe2c01af3734c960c778790df5d997e4d6555e4b4a4a360b86fbcd32a98eb"} Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.812710 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7rpqt"] Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.813727 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7rpqt" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.829070 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=1.8290569319999999 podStartE2EDuration="1.829056932s" podCreationTimestamp="2026-01-28 15:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:59.827579085 +0000 UTC m=+151.723417407" watchObservedRunningTime="2026-01-28 15:19:59.829056932 +0000 UTC m=+151.724895254" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.836612 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7rpqt"] Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.837564 4871 generic.go:334] "Generic (PLEG): container finished" podID="f7296424-ed41-429c-8e45-599795442f1d" containerID="3d2b68b5505f01ededb49f4326f49a54d4bdb1d27eaf1038fee7d72f051076cb" exitCode=0 Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.837677 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldxz9" event={"ID":"f7296424-ed41-429c-8e45-599795442f1d","Type":"ContainerDied","Data":"3d2b68b5505f01ededb49f4326f49a54d4bdb1d27eaf1038fee7d72f051076cb"} Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.842536 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" event={"ID":"ee25a7d4-5043-48d7-91d1-68f2af96109a","Type":"ContainerStarted","Data":"302a76e553c56910bd5e7c282d2a3bb25956e6b787536255e09cdd28263a5967"} Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.842665 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" 
event={"ID":"ee25a7d4-5043-48d7-91d1-68f2af96109a","Type":"ContainerStarted","Data":"17ddab776518c0d7f89a99f535010a783d9f356ea56b08117c1668e7ea6b37e9"} Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.842733 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.852805 4871 generic.go:334] "Generic (PLEG): container finished" podID="67d8b5ae-67a3-4ca7-b74e-35e5cc45d519" containerID="4e79bc6ab1e19842e5d4f9023f17a5ea72d9012b2a0288747ee09d4ed01afb74" exitCode=0 Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.852876 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxfdt" event={"ID":"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519","Type":"ContainerDied","Data":"4e79bc6ab1e19842e5d4f9023f17a5ea72d9012b2a0288747ee09d4ed01afb74"} Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.858259 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"81a8ea88-b9fc-4ae8-8240-8b0320045b7e","Type":"ContainerStarted","Data":"2e8309e43070d44041803a4b75de55c53abe268a91f23f9f0dd854192afdb7f8"} Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.860044 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/857f6040-72fa-4f10-97e4-2f4e4e2198a4-catalog-content\") pod \"redhat-marketplace-7rpqt\" (UID: \"857f6040-72fa-4f10-97e4-2f4e4e2198a4\") " pod="openshift-marketplace/redhat-marketplace-7rpqt" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.860126 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/857f6040-72fa-4f10-97e4-2f4e4e2198a4-utilities\") pod \"redhat-marketplace-7rpqt\" (UID: 
\"857f6040-72fa-4f10-97e4-2f4e4e2198a4\") " pod="openshift-marketplace/redhat-marketplace-7rpqt" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.860305 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpwh2\" (UniqueName: \"kubernetes.io/projected/857f6040-72fa-4f10-97e4-2f4e4e2198a4-kube-api-access-cpwh2\") pod \"redhat-marketplace-7rpqt\" (UID: \"857f6040-72fa-4f10-97e4-2f4e4e2198a4\") " pod="openshift-marketplace/redhat-marketplace-7rpqt" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.901551 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" podStartSLOduration=130.90153547 podStartE2EDuration="2m10.90153547s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:59.897509762 +0000 UTC m=+151.793348084" watchObservedRunningTime="2026-01-28 15:19:59.90153547 +0000 UTC m=+151.797373782" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.966514 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/857f6040-72fa-4f10-97e4-2f4e4e2198a4-utilities\") pod \"redhat-marketplace-7rpqt\" (UID: \"857f6040-72fa-4f10-97e4-2f4e4e2198a4\") " pod="openshift-marketplace/redhat-marketplace-7rpqt" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.966623 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpwh2\" (UniqueName: \"kubernetes.io/projected/857f6040-72fa-4f10-97e4-2f4e4e2198a4-kube-api-access-cpwh2\") pod \"redhat-marketplace-7rpqt\" (UID: \"857f6040-72fa-4f10-97e4-2f4e4e2198a4\") " pod="openshift-marketplace/redhat-marketplace-7rpqt" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.966676 4871 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/857f6040-72fa-4f10-97e4-2f4e4e2198a4-catalog-content\") pod \"redhat-marketplace-7rpqt\" (UID: \"857f6040-72fa-4f10-97e4-2f4e4e2198a4\") " pod="openshift-marketplace/redhat-marketplace-7rpqt" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.967132 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/857f6040-72fa-4f10-97e4-2f4e4e2198a4-catalog-content\") pod \"redhat-marketplace-7rpqt\" (UID: \"857f6040-72fa-4f10-97e4-2f4e4e2198a4\") " pod="openshift-marketplace/redhat-marketplace-7rpqt" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.967222 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/857f6040-72fa-4f10-97e4-2f4e4e2198a4-utilities\") pod \"redhat-marketplace-7rpqt\" (UID: \"857f6040-72fa-4f10-97e4-2f4e4e2198a4\") " pod="openshift-marketplace/redhat-marketplace-7rpqt" Jan 28 15:19:59 crc kubenswrapper[4871]: I0128 15:19:59.993139 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpwh2\" (UniqueName: \"kubernetes.io/projected/857f6040-72fa-4f10-97e4-2f4e4e2198a4-kube-api-access-cpwh2\") pod \"redhat-marketplace-7rpqt\" (UID: \"857f6040-72fa-4f10-97e4-2f4e4e2198a4\") " pod="openshift-marketplace/redhat-marketplace-7rpqt" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.098434 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-9q8n5" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.101833 4871 patch_prober.go:28] interesting pod/router-default-5444994796-9q8n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:20:00 crc 
kubenswrapper[4871]: [-]has-synced failed: reason withheld Jan 28 15:20:00 crc kubenswrapper[4871]: [+]process-running ok Jan 28 15:20:00 crc kubenswrapper[4871]: healthz check failed Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.101869 4871 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9q8n5" podUID="d48b65d8-de38-4d1f-9162-3f16e3b8401b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.120406 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9dhs"] Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.149654 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7rpqt" Jan 28 15:20:00 crc kubenswrapper[4871]: W0128 15:20:00.157809 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6575ba24_dfe2_4f55_96ee_6692928debdd.slice/crio-6f19e3ff60312f2f7e9be3411b1b02006cc948608b6f1adc92e6222513a6f127 WatchSource:0}: Error finding container 6f19e3ff60312f2f7e9be3411b1b02006cc948608b6f1adc92e6222513a6f127: Status 404 returned error can't find the container with id 6f19e3ff60312f2f7e9be3411b1b02006cc948608b6f1adc92e6222513a6f127 Jan 28 15:20:00 crc kubenswrapper[4871]: E0128 15:20:00.164988 4871 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podadd5ed23_8690_460e_ad71_166ae220ce1d.slice/crio-conmon-858a88d12ade42e72042019ba685c46e844c1d81a7e16a9962ce584e2cbe2ade.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podadd5ed23_8690_460e_ad71_166ae220ce1d.slice/crio-858a88d12ade42e72042019ba685c46e844c1d81a7e16a9962ce584e2cbe2ade.scope\": RecentStats: unable to find data in memory cache]" Jan 28 
15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.295758 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.296374 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.361914 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.478216 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.478789 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.480121 4871 patch_prober.go:28] interesting pod/console-f9d7485db-j85fr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.480203 4871 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-j85fr" podUID="2571452b-5b45-43d1-bd39-35ef29c4fe80" containerName="console" probeResult="failure" output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.492509 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7rpqt"] Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.589074 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xdhpp"] Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 
15:20:00.591329 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xdhpp" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.602465 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xdhpp"] Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.602689 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.676546 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcrhx\" (UniqueName: \"kubernetes.io/projected/9524a883-d083-471d-8c30-866172b8456e-kube-api-access-fcrhx\") pod \"redhat-operators-xdhpp\" (UID: \"9524a883-d083-471d-8c30-866172b8456e\") " pod="openshift-marketplace/redhat-operators-xdhpp" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.676778 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9524a883-d083-471d-8c30-866172b8456e-catalog-content\") pod \"redhat-operators-xdhpp\" (UID: \"9524a883-d083-471d-8c30-866172b8456e\") " pod="openshift-marketplace/redhat-operators-xdhpp" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.676804 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9524a883-d083-471d-8c30-866172b8456e-utilities\") pod \"redhat-operators-xdhpp\" (UID: \"9524a883-d083-471d-8c30-866172b8456e\") " pod="openshift-marketplace/redhat-operators-xdhpp" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.767360 4871 patch_prober.go:28] interesting pod/downloads-7954f5f757-zdfpg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 
10.217.0.23:8080: connect: connection refused" start-of-body= Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.767418 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zdfpg" podUID="3dcb49be-1798-4698-9d3c-39bf78d992e6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.767715 4871 patch_prober.go:28] interesting pod/downloads-7954f5f757-zdfpg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.767738 4871 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zdfpg" podUID="3dcb49be-1798-4698-9d3c-39bf78d992e6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.777920 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9524a883-d083-471d-8c30-866172b8456e-catalog-content\") pod \"redhat-operators-xdhpp\" (UID: \"9524a883-d083-471d-8c30-866172b8456e\") " pod="openshift-marketplace/redhat-operators-xdhpp" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.777971 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9524a883-d083-471d-8c30-866172b8456e-utilities\") pod \"redhat-operators-xdhpp\" (UID: \"9524a883-d083-471d-8c30-866172b8456e\") " pod="openshift-marketplace/redhat-operators-xdhpp" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.778000 4871 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fcrhx\" (UniqueName: \"kubernetes.io/projected/9524a883-d083-471d-8c30-866172b8456e-kube-api-access-fcrhx\") pod \"redhat-operators-xdhpp\" (UID: \"9524a883-d083-471d-8c30-866172b8456e\") " pod="openshift-marketplace/redhat-operators-xdhpp" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.778571 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9524a883-d083-471d-8c30-866172b8456e-utilities\") pod \"redhat-operators-xdhpp\" (UID: \"9524a883-d083-471d-8c30-866172b8456e\") " pod="openshift-marketplace/redhat-operators-xdhpp" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.778894 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9524a883-d083-471d-8c30-866172b8456e-catalog-content\") pod \"redhat-operators-xdhpp\" (UID: \"9524a883-d083-471d-8c30-866172b8456e\") " pod="openshift-marketplace/redhat-operators-xdhpp" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.802643 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcrhx\" (UniqueName: \"kubernetes.io/projected/9524a883-d083-471d-8c30-866172b8456e-kube-api-access-fcrhx\") pod \"redhat-operators-xdhpp\" (UID: \"9524a883-d083-471d-8c30-866172b8456e\") " pod="openshift-marketplace/redhat-operators-xdhpp" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.888442 4871 generic.go:334] "Generic (PLEG): container finished" podID="857f6040-72fa-4f10-97e4-2f4e4e2198a4" containerID="808893a25855ee26304717f1dcc08f26507c97b1dc0edcdae86a401c5ba8b067" exitCode=0 Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.888501 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7rpqt" 
event={"ID":"857f6040-72fa-4f10-97e4-2f4e4e2198a4","Type":"ContainerDied","Data":"808893a25855ee26304717f1dcc08f26507c97b1dc0edcdae86a401c5ba8b067"} Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.888528 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7rpqt" event={"ID":"857f6040-72fa-4f10-97e4-2f4e4e2198a4","Type":"ContainerStarted","Data":"004e6c1e2e3778dee5aa170d6418a4ad6fe7e684231e186db973cf6ae400835d"} Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.896117 4871 generic.go:334] "Generic (PLEG): container finished" podID="6575ba24-dfe2-4f55-96ee-6692928debdd" containerID="fd5f75951cc8b49e032414221c980a6877a544a2552ed879ff69bdfd7941ef18" exitCode=0 Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.896171 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9dhs" event={"ID":"6575ba24-dfe2-4f55-96ee-6692928debdd","Type":"ContainerDied","Data":"fd5f75951cc8b49e032414221c980a6877a544a2552ed879ff69bdfd7941ef18"} Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.896252 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9dhs" event={"ID":"6575ba24-dfe2-4f55-96ee-6692928debdd","Type":"ContainerStarted","Data":"6f19e3ff60312f2f7e9be3411b1b02006cc948608b6f1adc92e6222513a6f127"} Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.898522 4871 generic.go:334] "Generic (PLEG): container finished" podID="81a8ea88-b9fc-4ae8-8240-8b0320045b7e" containerID="e2eaa060b372c1c5bfe49247194ab7c2cd6f6afff63c79bd792c00ca7f2e488c" exitCode=0 Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.898626 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"81a8ea88-b9fc-4ae8-8240-8b0320045b7e","Type":"ContainerDied","Data":"e2eaa060b372c1c5bfe49247194ab7c2cd6f6afff63c79bd792c00ca7f2e488c"} Jan 28 15:20:00 crc kubenswrapper[4871]: 
I0128 15:20:00.900377 4871 generic.go:334] "Generic (PLEG): container finished" podID="add5ed23-8690-460e-ad71-166ae220ce1d" containerID="858a88d12ade42e72042019ba685c46e844c1d81a7e16a9962ce584e2cbe2ade" exitCode=0 Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.900670 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"add5ed23-8690-460e-ad71-166ae220ce1d","Type":"ContainerDied","Data":"858a88d12ade42e72042019ba685c46e844c1d81a7e16a9962ce584e2cbe2ade"} Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.933466 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jn4dg" Jan 28 15:20:00 crc kubenswrapper[4871]: I0128 15:20:00.960864 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xdhpp" Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.022637 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t2stq"] Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.023776 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t2stq" Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.043242 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t2stq"] Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.087513 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9695cd9-dbde-4846-a087-7224a1ece561-utilities\") pod \"redhat-operators-t2stq\" (UID: \"d9695cd9-dbde-4846-a087-7224a1ece561\") " pod="openshift-marketplace/redhat-operators-t2stq" Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.087643 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9695cd9-dbde-4846-a087-7224a1ece561-catalog-content\") pod \"redhat-operators-t2stq\" (UID: \"d9695cd9-dbde-4846-a087-7224a1ece561\") " pod="openshift-marketplace/redhat-operators-t2stq" Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.087664 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmt8w\" (UniqueName: \"kubernetes.io/projected/d9695cd9-dbde-4846-a087-7224a1ece561-kube-api-access-xmt8w\") pod \"redhat-operators-t2stq\" (UID: \"d9695cd9-dbde-4846-a087-7224a1ece561\") " pod="openshift-marketplace/redhat-operators-t2stq" Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.107527 4871 patch_prober.go:28] interesting pod/router-default-5444994796-9q8n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:20:01 crc kubenswrapper[4871]: [-]has-synced failed: reason withheld Jan 28 15:20:01 crc kubenswrapper[4871]: [+]process-running ok Jan 28 15:20:01 crc kubenswrapper[4871]: healthz check failed Jan 28 
15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.107577 4871 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9q8n5" podUID="d48b65d8-de38-4d1f-9162-3f16e3b8401b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.191849 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9695cd9-dbde-4846-a087-7224a1ece561-utilities\") pod \"redhat-operators-t2stq\" (UID: \"d9695cd9-dbde-4846-a087-7224a1ece561\") " pod="openshift-marketplace/redhat-operators-t2stq" Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.192292 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9695cd9-dbde-4846-a087-7224a1ece561-utilities\") pod \"redhat-operators-t2stq\" (UID: \"d9695cd9-dbde-4846-a087-7224a1ece561\") " pod="openshift-marketplace/redhat-operators-t2stq" Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.193147 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9695cd9-dbde-4846-a087-7224a1ece561-catalog-content\") pod \"redhat-operators-t2stq\" (UID: \"d9695cd9-dbde-4846-a087-7224a1ece561\") " pod="openshift-marketplace/redhat-operators-t2stq" Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.193173 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmt8w\" (UniqueName: \"kubernetes.io/projected/d9695cd9-dbde-4846-a087-7224a1ece561-kube-api-access-xmt8w\") pod \"redhat-operators-t2stq\" (UID: \"d9695cd9-dbde-4846-a087-7224a1ece561\") " pod="openshift-marketplace/redhat-operators-t2stq" Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.193674 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9695cd9-dbde-4846-a087-7224a1ece561-catalog-content\") pod \"redhat-operators-t2stq\" (UID: \"d9695cd9-dbde-4846-a087-7224a1ece561\") " pod="openshift-marketplace/redhat-operators-t2stq" Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.217230 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmt8w\" (UniqueName: \"kubernetes.io/projected/d9695cd9-dbde-4846-a087-7224a1ece561-kube-api-access-xmt8w\") pod \"redhat-operators-t2stq\" (UID: \"d9695cd9-dbde-4846-a087-7224a1ece561\") " pod="openshift-marketplace/redhat-operators-t2stq" Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.356700 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2stq" Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.360344 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xdhpp"] Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.862503 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t2stq"] Jan 28 15:20:01 crc kubenswrapper[4871]: W0128 15:20:01.886285 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9695cd9_dbde_4846_a087_7224a1ece561.slice/crio-49f7af796eb9978c134f9b7aae1083ca7bcedec5a419831b9e6cd88f7297da9e WatchSource:0}: Error finding container 49f7af796eb9978c134f9b7aae1083ca7bcedec5a419831b9e6cd88f7297da9e: Status 404 returned error can't find the container with id 49f7af796eb9978c134f9b7aae1083ca7bcedec5a419831b9e6cd88f7297da9e Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.912800 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2stq" 
event={"ID":"d9695cd9-dbde-4846-a087-7224a1ece561","Type":"ContainerStarted","Data":"49f7af796eb9978c134f9b7aae1083ca7bcedec5a419831b9e6cd88f7297da9e"} Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.917019 4871 generic.go:334] "Generic (PLEG): container finished" podID="9524a883-d083-471d-8c30-866172b8456e" containerID="4211dd4d67ec640f0657d91b5179acf5ee27c54fb03a1e66f3304ac1f93364cf" exitCode=0 Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.917089 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdhpp" event={"ID":"9524a883-d083-471d-8c30-866172b8456e","Type":"ContainerDied","Data":"4211dd4d67ec640f0657d91b5179acf5ee27c54fb03a1e66f3304ac1f93364cf"} Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.917159 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdhpp" event={"ID":"9524a883-d083-471d-8c30-866172b8456e","Type":"ContainerStarted","Data":"6c08175738eb3d7af5bda765afa4f9db8ada5c4bb65e567f1ba9dcc5f6a85355"} Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.920929 4871 generic.go:334] "Generic (PLEG): container finished" podID="ad449099-f3be-4711-8c75-a8fab2eabda3" containerID="79cc46a1eca7a4e33f4dd296c5858b950b28cd2fd1673f2a40414cd660aed505" exitCode=0 Jan 28 15:20:01 crc kubenswrapper[4871]: I0128 15:20:01.921041 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5" event={"ID":"ad449099-f3be-4711-8c75-a8fab2eabda3","Type":"ContainerDied","Data":"79cc46a1eca7a4e33f4dd296c5858b950b28cd2fd1673f2a40414cd660aed505"} Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.110075 4871 patch_prober.go:28] interesting pod/router-default-5444994796-9q8n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:20:02 crc kubenswrapper[4871]: 
[-]has-synced failed: reason withheld Jan 28 15:20:02 crc kubenswrapper[4871]: [+]process-running ok Jan 28 15:20:02 crc kubenswrapper[4871]: healthz check failed Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.110135 4871 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9q8n5" podUID="d48b65d8-de38-4d1f-9162-3f16e3b8401b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.171091 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.189959 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.238447 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81a8ea88-b9fc-4ae8-8240-8b0320045b7e-kube-api-access\") pod \"81a8ea88-b9fc-4ae8-8240-8b0320045b7e\" (UID: \"81a8ea88-b9fc-4ae8-8240-8b0320045b7e\") " Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.238506 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/add5ed23-8690-460e-ad71-166ae220ce1d-kube-api-access\") pod \"add5ed23-8690-460e-ad71-166ae220ce1d\" (UID: \"add5ed23-8690-460e-ad71-166ae220ce1d\") " Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.238951 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/add5ed23-8690-460e-ad71-166ae220ce1d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "add5ed23-8690-460e-ad71-166ae220ce1d" (UID: "add5ed23-8690-460e-ad71-166ae220ce1d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.239694 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/add5ed23-8690-460e-ad71-166ae220ce1d-kubelet-dir\") pod \"add5ed23-8690-460e-ad71-166ae220ce1d\" (UID: \"add5ed23-8690-460e-ad71-166ae220ce1d\") " Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.239773 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81a8ea88-b9fc-4ae8-8240-8b0320045b7e-kubelet-dir\") pod \"81a8ea88-b9fc-4ae8-8240-8b0320045b7e\" (UID: \"81a8ea88-b9fc-4ae8-8240-8b0320045b7e\") " Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.240016 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a8ea88-b9fc-4ae8-8240-8b0320045b7e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "81a8ea88-b9fc-4ae8-8240-8b0320045b7e" (UID: "81a8ea88-b9fc-4ae8-8240-8b0320045b7e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.246004 4871 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/add5ed23-8690-460e-ad71-166ae220ce1d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.246036 4871 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81a8ea88-b9fc-4ae8-8240-8b0320045b7e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.250444 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/add5ed23-8690-460e-ad71-166ae220ce1d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "add5ed23-8690-460e-ad71-166ae220ce1d" (UID: "add5ed23-8690-460e-ad71-166ae220ce1d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.251577 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a8ea88-b9fc-4ae8-8240-8b0320045b7e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "81a8ea88-b9fc-4ae8-8240-8b0320045b7e" (UID: "81a8ea88-b9fc-4ae8-8240-8b0320045b7e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.347268 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81a8ea88-b9fc-4ae8-8240-8b0320045b7e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.347563 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/add5ed23-8690-460e-ad71-166ae220ce1d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.949863 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"81a8ea88-b9fc-4ae8-8240-8b0320045b7e","Type":"ContainerDied","Data":"2e8309e43070d44041803a4b75de55c53abe268a91f23f9f0dd854192afdb7f8"} Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.949915 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e8309e43070d44041803a4b75de55c53abe268a91f23f9f0dd854192afdb7f8" Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.950486 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.963571 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.963618 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"add5ed23-8690-460e-ad71-166ae220ce1d","Type":"ContainerDied","Data":"b48fe2c01af3734c960c778790df5d997e4d6555e4b4a4a360b86fbcd32a98eb"} Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.963647 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b48fe2c01af3734c960c778790df5d997e4d6555e4b4a4a360b86fbcd32a98eb" Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.966917 4871 generic.go:334] "Generic (PLEG): container finished" podID="d9695cd9-dbde-4846-a087-7224a1ece561" containerID="e4d794b793e0c32c9ccf07cc943f01f1c6872769b93868c39e78da8849e46976" exitCode=0 Jan 28 15:20:02 crc kubenswrapper[4871]: I0128 15:20:02.967003 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2stq" event={"ID":"d9695cd9-dbde-4846-a087-7224a1ece561","Type":"ContainerDied","Data":"e4d794b793e0c32c9ccf07cc943f01f1c6872769b93868c39e78da8849e46976"} Jan 28 15:20:03 crc kubenswrapper[4871]: I0128 15:20:03.101546 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-9q8n5" Jan 28 15:20:03 crc kubenswrapper[4871]: I0128 15:20:03.107732 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-9q8n5" Jan 28 15:20:03 crc kubenswrapper[4871]: I0128 15:20:03.328736 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5" Jan 28 15:20:03 crc kubenswrapper[4871]: I0128 15:20:03.386831 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad449099-f3be-4711-8c75-a8fab2eabda3-config-volume\") pod \"ad449099-f3be-4711-8c75-a8fab2eabda3\" (UID: \"ad449099-f3be-4711-8c75-a8fab2eabda3\") " Jan 28 15:20:03 crc kubenswrapper[4871]: I0128 15:20:03.387016 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad449099-f3be-4711-8c75-a8fab2eabda3-secret-volume\") pod \"ad449099-f3be-4711-8c75-a8fab2eabda3\" (UID: \"ad449099-f3be-4711-8c75-a8fab2eabda3\") " Jan 28 15:20:03 crc kubenswrapper[4871]: I0128 15:20:03.387056 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-862qt\" (UniqueName: \"kubernetes.io/projected/ad449099-f3be-4711-8c75-a8fab2eabda3-kube-api-access-862qt\") pod \"ad449099-f3be-4711-8c75-a8fab2eabda3\" (UID: \"ad449099-f3be-4711-8c75-a8fab2eabda3\") " Jan 28 15:20:03 crc kubenswrapper[4871]: I0128 15:20:03.387618 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad449099-f3be-4711-8c75-a8fab2eabda3-config-volume" (OuterVolumeSpecName: "config-volume") pod "ad449099-f3be-4711-8c75-a8fab2eabda3" (UID: "ad449099-f3be-4711-8c75-a8fab2eabda3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:20:03 crc kubenswrapper[4871]: I0128 15:20:03.401062 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad449099-f3be-4711-8c75-a8fab2eabda3-kube-api-access-862qt" (OuterVolumeSpecName: "kube-api-access-862qt") pod "ad449099-f3be-4711-8c75-a8fab2eabda3" (UID: "ad449099-f3be-4711-8c75-a8fab2eabda3"). 
InnerVolumeSpecName "kube-api-access-862qt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:20:03 crc kubenswrapper[4871]: I0128 15:20:03.402042 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad449099-f3be-4711-8c75-a8fab2eabda3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ad449099-f3be-4711-8c75-a8fab2eabda3" (UID: "ad449099-f3be-4711-8c75-a8fab2eabda3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:20:03 crc kubenswrapper[4871]: I0128 15:20:03.488611 4871 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad449099-f3be-4711-8c75-a8fab2eabda3-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:03 crc kubenswrapper[4871]: I0128 15:20:03.488645 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-862qt\" (UniqueName: \"kubernetes.io/projected/ad449099-f3be-4711-8c75-a8fab2eabda3-kube-api-access-862qt\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:03 crc kubenswrapper[4871]: I0128 15:20:03.488654 4871 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad449099-f3be-4711-8c75-a8fab2eabda3-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:03 crc kubenswrapper[4871]: I0128 15:20:03.986390 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5" Jan 28 15:20:03 crc kubenswrapper[4871]: I0128 15:20:03.986402 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5" event={"ID":"ad449099-f3be-4711-8c75-a8fab2eabda3","Type":"ContainerDied","Data":"e5e08a4815f21ec7ad5f2c92ee45e8648c497f8661d58d7ab3d2e8f9d54fd623"} Jan 28 15:20:03 crc kubenswrapper[4871]: I0128 15:20:03.986957 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5e08a4815f21ec7ad5f2c92ee45e8648c497f8661d58d7ab3d2e8f9d54fd623" Jan 28 15:20:05 crc kubenswrapper[4871]: I0128 15:20:05.849651 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qxtnb" Jan 28 15:20:10 crc kubenswrapper[4871]: I0128 15:20:10.498372 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:20:10 crc kubenswrapper[4871]: I0128 15:20:10.502080 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:20:10 crc kubenswrapper[4871]: I0128 15:20:10.767715 4871 patch_prober.go:28] interesting pod/downloads-7954f5f757-zdfpg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 28 15:20:10 crc kubenswrapper[4871]: I0128 15:20:10.767782 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zdfpg" podUID="3dcb49be-1798-4698-9d3c-39bf78d992e6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 28 15:20:10 crc kubenswrapper[4871]: I0128 15:20:10.767959 4871 patch_prober.go:28] 
interesting pod/downloads-7954f5f757-zdfpg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 28 15:20:10 crc kubenswrapper[4871]: I0128 15:20:10.769377 4871 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zdfpg" podUID="3dcb49be-1798-4698-9d3c-39bf78d992e6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 28 15:20:11 crc kubenswrapper[4871]: I0128 15:20:11.216535 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs\") pod \"network-metrics-daemon-jp46k\" (UID: \"64aa044d-1eb6-4e5f-9c12-96ba346374fa\") " pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:20:11 crc kubenswrapper[4871]: I0128 15:20:11.225224 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64aa044d-1eb6-4e5f-9c12-96ba346374fa-metrics-certs\") pod \"network-metrics-daemon-jp46k\" (UID: \"64aa044d-1eb6-4e5f-9c12-96ba346374fa\") " pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:20:11 crc kubenswrapper[4871]: I0128 15:20:11.231134 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jp46k" Jan 28 15:20:13 crc kubenswrapper[4871]: I0128 15:20:13.814310 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:20:13 crc kubenswrapper[4871]: I0128 15:20:13.815070 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:20:18 crc kubenswrapper[4871]: I0128 15:20:18.510847 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:20:20 crc kubenswrapper[4871]: I0128 15:20:20.783202 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-zdfpg" Jan 28 15:20:30 crc kubenswrapper[4871]: I0128 15:20:30.526698 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8xpb5" Jan 28 15:20:32 crc kubenswrapper[4871]: E0128 15:20:32.552343 4871 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 15:20:32 crc kubenswrapper[4871]: E0128 15:20:32.552829 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4qsb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mjks9_openshift-marketplace(e209b4f2-aaea-496b-8e14-58f2aa8faaa5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:20:32 crc kubenswrapper[4871]: E0128 15:20:32.554173 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-mjks9" podUID="e209b4f2-aaea-496b-8e14-58f2aa8faaa5" Jan 28 15:20:32 crc kubenswrapper[4871]: E0128 15:20:32.701963 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mjks9" podUID="e209b4f2-aaea-496b-8e14-58f2aa8faaa5" Jan 28 15:20:32 crc kubenswrapper[4871]: E0128 15:20:32.769799 4871 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 15:20:32 crc kubenswrapper[4871]: E0128 15:20:32.770415 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lkdlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ldxz9_openshift-marketplace(f7296424-ed41-429c-8e45-599795442f1d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:20:32 crc kubenswrapper[4871]: E0128 15:20:32.771620 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ldxz9" podUID="f7296424-ed41-429c-8e45-599795442f1d" Jan 28 15:20:34 crc 
kubenswrapper[4871]: E0128 15:20:34.423850 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ldxz9" podUID="f7296424-ed41-429c-8e45-599795442f1d" Jan 28 15:20:34 crc kubenswrapper[4871]: E0128 15:20:34.524158 4871 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 15:20:34 crc kubenswrapper[4871]: E0128 15:20:34.524319 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-28fjt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pxfdt_openshift-marketplace(67d8b5ae-67a3-4ca7-b74e-35e5cc45d519): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:20:34 crc kubenswrapper[4871]: E0128 15:20:34.526256 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pxfdt" podUID="67d8b5ae-67a3-4ca7-b74e-35e5cc45d519" Jan 28 15:20:34 crc 
kubenswrapper[4871]: E0128 15:20:34.535630 4871 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 15:20:34 crc kubenswrapper[4871]: E0128 15:20:34.535754 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tfvtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-g7btj_openshift-marketplace(9b83a5ec-9ed6-4e66-9a39-610a39f64d19): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:20:34 crc kubenswrapper[4871]: E0128 15:20:34.537957 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-g7btj" podUID="9b83a5ec-9ed6-4e66-9a39-610a39f64d19" Jan 28 15:20:34 crc kubenswrapper[4871]: E0128 15:20:34.543155 4871 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 15:20:34 crc kubenswrapper[4871]: E0128 15:20:34.543247 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpwh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-7rpqt_openshift-marketplace(857f6040-72fa-4f10-97e4-2f4e4e2198a4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:20:34 crc kubenswrapper[4871]: E0128 15:20:34.544526 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-7rpqt" podUID="857f6040-72fa-4f10-97e4-2f4e4e2198a4" Jan 28 15:20:36 crc 
kubenswrapper[4871]: I0128 15:20:36.428553 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:37 crc kubenswrapper[4871]: I0128 15:20:37.046341 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 15:20:37 crc kubenswrapper[4871]: E0128 15:20:37.047155 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81a8ea88-b9fc-4ae8-8240-8b0320045b7e" containerName="pruner" Jan 28 15:20:37 crc kubenswrapper[4871]: I0128 15:20:37.047262 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="81a8ea88-b9fc-4ae8-8240-8b0320045b7e" containerName="pruner" Jan 28 15:20:37 crc kubenswrapper[4871]: E0128 15:20:37.047378 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add5ed23-8690-460e-ad71-166ae220ce1d" containerName="pruner" Jan 28 15:20:37 crc kubenswrapper[4871]: I0128 15:20:37.047462 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="add5ed23-8690-460e-ad71-166ae220ce1d" containerName="pruner" Jan 28 15:20:37 crc kubenswrapper[4871]: E0128 15:20:37.047551 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad449099-f3be-4711-8c75-a8fab2eabda3" containerName="collect-profiles" Jan 28 15:20:37 crc kubenswrapper[4871]: I0128 15:20:37.047668 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad449099-f3be-4711-8c75-a8fab2eabda3" containerName="collect-profiles" Jan 28 15:20:37 crc kubenswrapper[4871]: I0128 15:20:37.047920 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad449099-f3be-4711-8c75-a8fab2eabda3" containerName="collect-profiles" Jan 28 15:20:37 crc kubenswrapper[4871]: I0128 15:20:37.048022 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="81a8ea88-b9fc-4ae8-8240-8b0320045b7e" containerName="pruner" Jan 28 15:20:37 crc kubenswrapper[4871]: I0128 15:20:37.048119 4871 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="add5ed23-8690-460e-ad71-166ae220ce1d" containerName="pruner" Jan 28 15:20:37 crc kubenswrapper[4871]: I0128 15:20:37.048685 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:20:37 crc kubenswrapper[4871]: I0128 15:20:37.049084 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 15:20:37 crc kubenswrapper[4871]: I0128 15:20:37.052284 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 15:20:37 crc kubenswrapper[4871]: I0128 15:20:37.052561 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 15:20:37 crc kubenswrapper[4871]: I0128 15:20:37.161118 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ed14085-7d7e-4900-9c6a-c43509419bfa-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7ed14085-7d7e-4900-9c6a-c43509419bfa\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:20:37 crc kubenswrapper[4871]: I0128 15:20:37.161162 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ed14085-7d7e-4900-9c6a-c43509419bfa-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7ed14085-7d7e-4900-9c6a-c43509419bfa\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:20:37 crc kubenswrapper[4871]: I0128 15:20:37.262677 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ed14085-7d7e-4900-9c6a-c43509419bfa-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7ed14085-7d7e-4900-9c6a-c43509419bfa\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:20:37 crc kubenswrapper[4871]: I0128 15:20:37.263000 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ed14085-7d7e-4900-9c6a-c43509419bfa-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7ed14085-7d7e-4900-9c6a-c43509419bfa\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:20:37 crc kubenswrapper[4871]: I0128 15:20:37.262777 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ed14085-7d7e-4900-9c6a-c43509419bfa-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7ed14085-7d7e-4900-9c6a-c43509419bfa\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:20:37 crc kubenswrapper[4871]: I0128 15:20:37.297755 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ed14085-7d7e-4900-9c6a-c43509419bfa-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7ed14085-7d7e-4900-9c6a-c43509419bfa\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:20:37 crc kubenswrapper[4871]: I0128 15:20:37.381684 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:20:37 crc kubenswrapper[4871]: E0128 15:20:37.981775 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-g7btj" podUID="9b83a5ec-9ed6-4e66-9a39-610a39f64d19" Jan 28 15:20:37 crc kubenswrapper[4871]: E0128 15:20:37.981847 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pxfdt" podUID="67d8b5ae-67a3-4ca7-b74e-35e5cc45d519" Jan 28 15:20:37 crc kubenswrapper[4871]: E0128 15:20:37.981956 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-7rpqt" podUID="857f6040-72fa-4f10-97e4-2f4e4e2198a4" Jan 28 15:20:38 crc kubenswrapper[4871]: E0128 15:20:38.099634 4871 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 15:20:38 crc kubenswrapper[4871]: E0128 15:20:38.099978 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmt8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-t2stq_openshift-marketplace(d9695cd9-dbde-4846-a087-7224a1ece561): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:20:38 crc kubenswrapper[4871]: E0128 15:20:38.101598 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-t2stq" podUID="d9695cd9-dbde-4846-a087-7224a1ece561" Jan 28 15:20:38 crc 
kubenswrapper[4871]: E0128 15:20:38.105280 4871 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 15:20:38 crc kubenswrapper[4871]: E0128 15:20:38.105422 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fcrhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-xdhpp_openshift-marketplace(9524a883-d083-471d-8c30-866172b8456e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:20:38 crc kubenswrapper[4871]: E0128 15:20:38.106631 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-xdhpp" podUID="9524a883-d083-471d-8c30-866172b8456e" Jan 28 15:20:38 crc kubenswrapper[4871]: I0128 15:20:38.450719 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jp46k"] Jan 28 15:20:38 crc kubenswrapper[4871]: I0128 15:20:38.454274 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 15:20:38 crc kubenswrapper[4871]: W0128 15:20:38.464582 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64aa044d_1eb6_4e5f_9c12_96ba346374fa.slice/crio-fec7033d5bec297f3d59835783eda89183288f3cd997e32b38dbdfba13f6edcb WatchSource:0}: Error finding container fec7033d5bec297f3d59835783eda89183288f3cd997e32b38dbdfba13f6edcb: Status 404 returned error can't find the container with id fec7033d5bec297f3d59835783eda89183288f3cd997e32b38dbdfba13f6edcb Jan 28 15:20:38 crc kubenswrapper[4871]: I0128 15:20:38.704642 4871 generic.go:334] "Generic (PLEG): container finished" podID="6575ba24-dfe2-4f55-96ee-6692928debdd" containerID="a828db6bac502002c6b0a6309a397244276fd3a021809cd9d6faae9ab4eb1b49" exitCode=0 Jan 28 15:20:38 crc kubenswrapper[4871]: I0128 15:20:38.704708 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9dhs" 
event={"ID":"6575ba24-dfe2-4f55-96ee-6692928debdd","Type":"ContainerDied","Data":"a828db6bac502002c6b0a6309a397244276fd3a021809cd9d6faae9ab4eb1b49"} Jan 28 15:20:38 crc kubenswrapper[4871]: I0128 15:20:38.705974 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7ed14085-7d7e-4900-9c6a-c43509419bfa","Type":"ContainerStarted","Data":"132f4242905266cad733607bb8091d9ac57cda0daf6c70d0ae49e3210bf4084e"} Jan 28 15:20:38 crc kubenswrapper[4871]: I0128 15:20:38.708127 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jp46k" event={"ID":"64aa044d-1eb6-4e5f-9c12-96ba346374fa","Type":"ContainerStarted","Data":"2a922581b5a615624eb6c8c2bf5720d2480108dbc132a81ac165a97de51462c6"} Jan 28 15:20:38 crc kubenswrapper[4871]: I0128 15:20:38.708172 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jp46k" event={"ID":"64aa044d-1eb6-4e5f-9c12-96ba346374fa","Type":"ContainerStarted","Data":"fec7033d5bec297f3d59835783eda89183288f3cd997e32b38dbdfba13f6edcb"} Jan 28 15:20:38 crc kubenswrapper[4871]: E0128 15:20:38.716527 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-t2stq" podUID="d9695cd9-dbde-4846-a087-7224a1ece561" Jan 28 15:20:39 crc kubenswrapper[4871]: I0128 15:20:39.726166 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9dhs" event={"ID":"6575ba24-dfe2-4f55-96ee-6692928debdd","Type":"ContainerStarted","Data":"1a3764765ae722333d10f0e1d375a7e73470d0cea68ac126b26f4db0d7353731"} Jan 28 15:20:39 crc kubenswrapper[4871]: I0128 15:20:39.730382 4871 generic.go:334] "Generic (PLEG): container finished" podID="7ed14085-7d7e-4900-9c6a-c43509419bfa" 
containerID="b682d6144cda306140b536c6460ccc0cc7fb6fa5047a0642fb34915144bc63ba" exitCode=0 Jan 28 15:20:39 crc kubenswrapper[4871]: I0128 15:20:39.730456 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7ed14085-7d7e-4900-9c6a-c43509419bfa","Type":"ContainerDied","Data":"b682d6144cda306140b536c6460ccc0cc7fb6fa5047a0642fb34915144bc63ba"} Jan 28 15:20:39 crc kubenswrapper[4871]: I0128 15:20:39.735872 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jp46k" event={"ID":"64aa044d-1eb6-4e5f-9c12-96ba346374fa","Type":"ContainerStarted","Data":"a12fd936c939c17e9abf01a017304b8dc335a0cad2db4f4ad8b8a803303c79c1"} Jan 28 15:20:39 crc kubenswrapper[4871]: I0128 15:20:39.758202 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j9dhs" podStartSLOduration=2.439329167 podStartE2EDuration="40.758185007s" podCreationTimestamp="2026-01-28 15:19:59 +0000 UTC" firstStartedPulling="2026-01-28 15:20:00.904245576 +0000 UTC m=+152.800083898" lastFinishedPulling="2026-01-28 15:20:39.223101406 +0000 UTC m=+191.118939738" observedRunningTime="2026-01-28 15:20:39.757377261 +0000 UTC m=+191.653215593" watchObservedRunningTime="2026-01-28 15:20:39.758185007 +0000 UTC m=+191.654023329" Jan 28 15:20:39 crc kubenswrapper[4871]: I0128 15:20:39.779574 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-jp46k" podStartSLOduration=170.779556278 podStartE2EDuration="2m50.779556278s" podCreationTimestamp="2026-01-28 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:39.777301566 +0000 UTC m=+191.673139898" watchObservedRunningTime="2026-01-28 15:20:39.779556278 +0000 UTC m=+191.675394600" Jan 28 15:20:41 crc kubenswrapper[4871]: I0128 
15:20:41.056272 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:20:41 crc kubenswrapper[4871]: I0128 15:20:41.242992 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ed14085-7d7e-4900-9c6a-c43509419bfa-kubelet-dir\") pod \"7ed14085-7d7e-4900-9c6a-c43509419bfa\" (UID: \"7ed14085-7d7e-4900-9c6a-c43509419bfa\") " Jan 28 15:20:41 crc kubenswrapper[4871]: I0128 15:20:41.243123 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ed14085-7d7e-4900-9c6a-c43509419bfa-kube-api-access\") pod \"7ed14085-7d7e-4900-9c6a-c43509419bfa\" (UID: \"7ed14085-7d7e-4900-9c6a-c43509419bfa\") " Jan 28 15:20:41 crc kubenswrapper[4871]: I0128 15:20:41.243218 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ed14085-7d7e-4900-9c6a-c43509419bfa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7ed14085-7d7e-4900-9c6a-c43509419bfa" (UID: "7ed14085-7d7e-4900-9c6a-c43509419bfa"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:20:41 crc kubenswrapper[4871]: I0128 15:20:41.243518 4871 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ed14085-7d7e-4900-9c6a-c43509419bfa-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:41 crc kubenswrapper[4871]: I0128 15:20:41.250731 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ed14085-7d7e-4900-9c6a-c43509419bfa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7ed14085-7d7e-4900-9c6a-c43509419bfa" (UID: "7ed14085-7d7e-4900-9c6a-c43509419bfa"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:20:41 crc kubenswrapper[4871]: I0128 15:20:41.344547 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ed14085-7d7e-4900-9c6a-c43509419bfa-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:41 crc kubenswrapper[4871]: I0128 15:20:41.747082 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7ed14085-7d7e-4900-9c6a-c43509419bfa","Type":"ContainerDied","Data":"132f4242905266cad733607bb8091d9ac57cda0daf6c70d0ae49e3210bf4084e"} Jan 28 15:20:41 crc kubenswrapper[4871]: I0128 15:20:41.747403 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="132f4242905266cad733607bb8091d9ac57cda0daf6c70d0ae49e3210bf4084e" Jan 28 15:20:41 crc kubenswrapper[4871]: I0128 15:20:41.747124 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:20:43 crc kubenswrapper[4871]: I0128 15:20:43.440953 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 15:20:43 crc kubenswrapper[4871]: E0128 15:20:43.441146 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ed14085-7d7e-4900-9c6a-c43509419bfa" containerName="pruner" Jan 28 15:20:43 crc kubenswrapper[4871]: I0128 15:20:43.441156 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ed14085-7d7e-4900-9c6a-c43509419bfa" containerName="pruner" Jan 28 15:20:43 crc kubenswrapper[4871]: I0128 15:20:43.441262 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ed14085-7d7e-4900-9c6a-c43509419bfa" containerName="pruner" Jan 28 15:20:43 crc kubenswrapper[4871]: I0128 15:20:43.441581 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:20:43 crc kubenswrapper[4871]: I0128 15:20:43.444921 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 15:20:43 crc kubenswrapper[4871]: I0128 15:20:43.445168 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 15:20:43 crc kubenswrapper[4871]: I0128 15:20:43.452862 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 15:20:43 crc kubenswrapper[4871]: I0128 15:20:43.474626 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaebff6e-df1b-477a-8eda-dd86e54561b0-kube-api-access\") pod \"installer-9-crc\" (UID: \"eaebff6e-df1b-477a-8eda-dd86e54561b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:20:43 crc kubenswrapper[4871]: I0128 15:20:43.474687 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eaebff6e-df1b-477a-8eda-dd86e54561b0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"eaebff6e-df1b-477a-8eda-dd86e54561b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:20:43 crc kubenswrapper[4871]: I0128 15:20:43.474710 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eaebff6e-df1b-477a-8eda-dd86e54561b0-var-lock\") pod \"installer-9-crc\" (UID: \"eaebff6e-df1b-477a-8eda-dd86e54561b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:20:43 crc kubenswrapper[4871]: I0128 15:20:43.575439 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/eaebff6e-df1b-477a-8eda-dd86e54561b0-kube-api-access\") pod \"installer-9-crc\" (UID: \"eaebff6e-df1b-477a-8eda-dd86e54561b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:20:43 crc kubenswrapper[4871]: I0128 15:20:43.575509 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eaebff6e-df1b-477a-8eda-dd86e54561b0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"eaebff6e-df1b-477a-8eda-dd86e54561b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:20:43 crc kubenswrapper[4871]: I0128 15:20:43.575532 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eaebff6e-df1b-477a-8eda-dd86e54561b0-var-lock\") pod \"installer-9-crc\" (UID: \"eaebff6e-df1b-477a-8eda-dd86e54561b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:20:43 crc kubenswrapper[4871]: I0128 15:20:43.575671 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eaebff6e-df1b-477a-8eda-dd86e54561b0-var-lock\") pod \"installer-9-crc\" (UID: \"eaebff6e-df1b-477a-8eda-dd86e54561b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:20:43 crc kubenswrapper[4871]: I0128 15:20:43.575786 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eaebff6e-df1b-477a-8eda-dd86e54561b0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"eaebff6e-df1b-477a-8eda-dd86e54561b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:20:43 crc kubenswrapper[4871]: I0128 15:20:43.592180 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaebff6e-df1b-477a-8eda-dd86e54561b0-kube-api-access\") pod \"installer-9-crc\" (UID: \"eaebff6e-df1b-477a-8eda-dd86e54561b0\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:20:43 crc kubenswrapper[4871]: I0128 15:20:43.769455 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:20:43 crc kubenswrapper[4871]: I0128 15:20:43.814059 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:20:43 crc kubenswrapper[4871]: I0128 15:20:43.814133 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:20:44 crc kubenswrapper[4871]: I0128 15:20:44.170773 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 15:20:44 crc kubenswrapper[4871]: I0128 15:20:44.763126 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"eaebff6e-df1b-477a-8eda-dd86e54561b0","Type":"ContainerStarted","Data":"81889f26b8916c02429856a7195aa1f64a819070e01afd3c7abdda9ecdf6908c"} Jan 28 15:20:44 crc kubenswrapper[4871]: I0128 15:20:44.764565 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"eaebff6e-df1b-477a-8eda-dd86e54561b0","Type":"ContainerStarted","Data":"d694ae1c5b6a7e692247bee7ed2420af2ebdc54e420e85c84ead0db3b8c18661"} Jan 28 15:20:44 crc kubenswrapper[4871]: I0128 15:20:44.783031 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.783009423 
podStartE2EDuration="1.783009423s" podCreationTimestamp="2026-01-28 15:20:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:44.778486506 +0000 UTC m=+196.674324828" watchObservedRunningTime="2026-01-28 15:20:44.783009423 +0000 UTC m=+196.678847765" Jan 28 15:20:49 crc kubenswrapper[4871]: I0128 15:20:49.719793 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j9dhs" Jan 28 15:20:49 crc kubenswrapper[4871]: I0128 15:20:49.720353 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j9dhs" Jan 28 15:20:49 crc kubenswrapper[4871]: I0128 15:20:49.788086 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2stq" event={"ID":"d9695cd9-dbde-4846-a087-7224a1ece561","Type":"ContainerStarted","Data":"af2d24639943762f38449f2f782ee1fec05176b7a7236ce090f15b1c48a8a916"} Jan 28 15:20:49 crc kubenswrapper[4871]: I0128 15:20:49.791717 4871 generic.go:334] "Generic (PLEG): container finished" podID="e209b4f2-aaea-496b-8e14-58f2aa8faaa5" containerID="b9e99db464bc9e3a1e0b34932c94c2e7dae3694150811fc884fc475944dcc933" exitCode=0 Jan 28 15:20:49 crc kubenswrapper[4871]: I0128 15:20:49.791741 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjks9" event={"ID":"e209b4f2-aaea-496b-8e14-58f2aa8faaa5","Type":"ContainerDied","Data":"b9e99db464bc9e3a1e0b34932c94c2e7dae3694150811fc884fc475944dcc933"} Jan 28 15:20:49 crc kubenswrapper[4871]: I0128 15:20:49.887323 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j9dhs" Jan 28 15:20:49 crc kubenswrapper[4871]: I0128 15:20:49.926953 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-j9dhs" Jan 28 15:20:50 crc kubenswrapper[4871]: I0128 15:20:50.798959 4871 generic.go:334] "Generic (PLEG): container finished" podID="d9695cd9-dbde-4846-a087-7224a1ece561" containerID="af2d24639943762f38449f2f782ee1fec05176b7a7236ce090f15b1c48a8a916" exitCode=0 Jan 28 15:20:50 crc kubenswrapper[4871]: I0128 15:20:50.799052 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2stq" event={"ID":"d9695cd9-dbde-4846-a087-7224a1ece561","Type":"ContainerDied","Data":"af2d24639943762f38449f2f782ee1fec05176b7a7236ce090f15b1c48a8a916"} Jan 28 15:20:50 crc kubenswrapper[4871]: I0128 15:20:50.801017 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjks9" event={"ID":"e209b4f2-aaea-496b-8e14-58f2aa8faaa5","Type":"ContainerStarted","Data":"0b16713e8946b700c358d9b1c6eb1db8567dd58bf9276af91cfe20a9bbd97011"} Jan 28 15:20:50 crc kubenswrapper[4871]: I0128 15:20:50.803159 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldxz9" event={"ID":"f7296424-ed41-429c-8e45-599795442f1d","Type":"ContainerStarted","Data":"15fe145a1bcd76556cf22aa2a2de710ddeceb20898131088a82ef555aa8afebb"} Jan 28 15:20:50 crc kubenswrapper[4871]: I0128 15:20:50.805787 4871 generic.go:334] "Generic (PLEG): container finished" podID="9b83a5ec-9ed6-4e66-9a39-610a39f64d19" containerID="30cd828a28c7ef5585485a7c4743d26c2c76c8c4e4bfc61cf754c927935c966f" exitCode=0 Jan 28 15:20:50 crc kubenswrapper[4871]: I0128 15:20:50.805893 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7btj" event={"ID":"9b83a5ec-9ed6-4e66-9a39-610a39f64d19","Type":"ContainerDied","Data":"30cd828a28c7ef5585485a7c4743d26c2c76c8c4e4bfc61cf754c927935c966f"} Jan 28 15:20:50 crc kubenswrapper[4871]: I0128 15:20:50.854722 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-mjks9" podStartSLOduration=2.396311036 podStartE2EDuration="53.854702927s" podCreationTimestamp="2026-01-28 15:19:57 +0000 UTC" firstStartedPulling="2026-01-28 15:19:58.73067583 +0000 UTC m=+150.626514152" lastFinishedPulling="2026-01-28 15:20:50.189067701 +0000 UTC m=+202.084906043" observedRunningTime="2026-01-28 15:20:50.853092425 +0000 UTC m=+202.748930767" watchObservedRunningTime="2026-01-28 15:20:50.854702927 +0000 UTC m=+202.750541249" Jan 28 15:20:51 crc kubenswrapper[4871]: I0128 15:20:51.813667 4871 generic.go:334] "Generic (PLEG): container finished" podID="f7296424-ed41-429c-8e45-599795442f1d" containerID="15fe145a1bcd76556cf22aa2a2de710ddeceb20898131088a82ef555aa8afebb" exitCode=0 Jan 28 15:20:51 crc kubenswrapper[4871]: I0128 15:20:51.813761 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldxz9" event={"ID":"f7296424-ed41-429c-8e45-599795442f1d","Type":"ContainerDied","Data":"15fe145a1bcd76556cf22aa2a2de710ddeceb20898131088a82ef555aa8afebb"} Jan 28 15:20:51 crc kubenswrapper[4871]: I0128 15:20:51.816306 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7btj" event={"ID":"9b83a5ec-9ed6-4e66-9a39-610a39f64d19","Type":"ContainerStarted","Data":"40065d911288088f9fb363f8e4e01affe9a6970d5d2c6b4be3a1f393cc9f9efc"} Jan 28 15:20:51 crc kubenswrapper[4871]: I0128 15:20:51.818702 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2stq" event={"ID":"d9695cd9-dbde-4846-a087-7224a1ece561","Type":"ContainerStarted","Data":"097d9c58737912189db014b7f1878135a50e3aa42be87727aaa0391ca05ad154"} Jan 28 15:20:51 crc kubenswrapper[4871]: I0128 15:20:51.851923 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g7btj" podStartSLOduration=2.3212925970000002 podStartE2EDuration="54.851903566s" 
podCreationTimestamp="2026-01-28 15:19:57 +0000 UTC" firstStartedPulling="2026-01-28 15:19:58.751523673 +0000 UTC m=+150.647361995" lastFinishedPulling="2026-01-28 15:20:51.282134602 +0000 UTC m=+203.177972964" observedRunningTime="2026-01-28 15:20:51.849320142 +0000 UTC m=+203.745158474" watchObservedRunningTime="2026-01-28 15:20:51.851903566 +0000 UTC m=+203.747741888" Jan 28 15:20:51 crc kubenswrapper[4871]: I0128 15:20:51.868432 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t2stq" podStartSLOduration=3.193055474 podStartE2EDuration="51.868413432s" podCreationTimestamp="2026-01-28 15:20:00 +0000 UTC" firstStartedPulling="2026-01-28 15:20:02.975854984 +0000 UTC m=+154.871693306" lastFinishedPulling="2026-01-28 15:20:51.651212942 +0000 UTC m=+203.547051264" observedRunningTime="2026-01-28 15:20:51.865251969 +0000 UTC m=+203.761090301" watchObservedRunningTime="2026-01-28 15:20:51.868413432 +0000 UTC m=+203.764251754" Jan 28 15:20:52 crc kubenswrapper[4871]: I0128 15:20:52.826569 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldxz9" event={"ID":"f7296424-ed41-429c-8e45-599795442f1d","Type":"ContainerStarted","Data":"fe41996cf20c26455cf348915e1d6b0841af54a9ce03d297beb23d64b0a931a7"} Jan 28 15:20:52 crc kubenswrapper[4871]: I0128 15:20:52.833409 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdhpp" event={"ID":"9524a883-d083-471d-8c30-866172b8456e","Type":"ContainerStarted","Data":"f60d614295b1b9292926fcddb53c4d0c9e9e4eb0c70263ea02518ccca6085f6a"} Jan 28 15:20:52 crc kubenswrapper[4871]: I0128 15:20:52.845976 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ldxz9" podStartSLOduration=3.486505495 podStartE2EDuration="55.845959423s" podCreationTimestamp="2026-01-28 15:19:57 +0000 UTC" firstStartedPulling="2026-01-28 
15:19:59.841676394 +0000 UTC m=+151.737514716" lastFinishedPulling="2026-01-28 15:20:52.201130322 +0000 UTC m=+204.096968644" observedRunningTime="2026-01-28 15:20:52.845313382 +0000 UTC m=+204.741151724" watchObservedRunningTime="2026-01-28 15:20:52.845959423 +0000 UTC m=+204.741797745" Jan 28 15:20:53 crc kubenswrapper[4871]: I0128 15:20:53.847656 4871 generic.go:334] "Generic (PLEG): container finished" podID="9524a883-d083-471d-8c30-866172b8456e" containerID="f60d614295b1b9292926fcddb53c4d0c9e9e4eb0c70263ea02518ccca6085f6a" exitCode=0 Jan 28 15:20:53 crc kubenswrapper[4871]: I0128 15:20:53.847722 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdhpp" event={"ID":"9524a883-d083-471d-8c30-866172b8456e","Type":"ContainerDied","Data":"f60d614295b1b9292926fcddb53c4d0c9e9e4eb0c70263ea02518ccca6085f6a"} Jan 28 15:20:57 crc kubenswrapper[4871]: I0128 15:20:57.789998 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g7btj" Jan 28 15:20:57 crc kubenswrapper[4871]: I0128 15:20:57.790775 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g7btj" Jan 28 15:20:57 crc kubenswrapper[4871]: I0128 15:20:57.840270 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g7btj" Jan 28 15:20:57 crc kubenswrapper[4871]: I0128 15:20:57.901687 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g7btj" Jan 28 15:20:57 crc kubenswrapper[4871]: I0128 15:20:57.918535 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mjks9" Jan 28 15:20:57 crc kubenswrapper[4871]: I0128 15:20:57.918613 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-mjks9" Jan 28 15:20:57 crc kubenswrapper[4871]: I0128 15:20:57.972423 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mjks9" Jan 28 15:20:58 crc kubenswrapper[4871]: I0128 15:20:58.317886 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ldxz9" Jan 28 15:20:58 crc kubenswrapper[4871]: I0128 15:20:58.317965 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ldxz9" Jan 28 15:20:58 crc kubenswrapper[4871]: I0128 15:20:58.393974 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ldxz9" Jan 28 15:20:58 crc kubenswrapper[4871]: I0128 15:20:58.926636 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mjks9" Jan 28 15:20:58 crc kubenswrapper[4871]: I0128 15:20:58.941944 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ldxz9" Jan 28 15:21:00 crc kubenswrapper[4871]: I0128 15:21:00.336726 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ldxz9"] Jan 28 15:21:00 crc kubenswrapper[4871]: I0128 15:21:00.897286 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ldxz9" podUID="f7296424-ed41-429c-8e45-599795442f1d" containerName="registry-server" containerID="cri-o://fe41996cf20c26455cf348915e1d6b0841af54a9ce03d297beb23d64b0a931a7" gracePeriod=2 Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.357649 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t2stq" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.357958 4871 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t2stq" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.416199 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t2stq" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.445873 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ldxz9" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.532026 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkdlw\" (UniqueName: \"kubernetes.io/projected/f7296424-ed41-429c-8e45-599795442f1d-kube-api-access-lkdlw\") pod \"f7296424-ed41-429c-8e45-599795442f1d\" (UID: \"f7296424-ed41-429c-8e45-599795442f1d\") " Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.532165 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7296424-ed41-429c-8e45-599795442f1d-catalog-content\") pod \"f7296424-ed41-429c-8e45-599795442f1d\" (UID: \"f7296424-ed41-429c-8e45-599795442f1d\") " Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.532240 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7296424-ed41-429c-8e45-599795442f1d-utilities\") pod \"f7296424-ed41-429c-8e45-599795442f1d\" (UID: \"f7296424-ed41-429c-8e45-599795442f1d\") " Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.533092 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7296424-ed41-429c-8e45-599795442f1d-utilities" (OuterVolumeSpecName: "utilities") pod "f7296424-ed41-429c-8e45-599795442f1d" (UID: "f7296424-ed41-429c-8e45-599795442f1d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.542862 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7296424-ed41-429c-8e45-599795442f1d-kube-api-access-lkdlw" (OuterVolumeSpecName: "kube-api-access-lkdlw") pod "f7296424-ed41-429c-8e45-599795442f1d" (UID: "f7296424-ed41-429c-8e45-599795442f1d"). InnerVolumeSpecName "kube-api-access-lkdlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.584010 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7296424-ed41-429c-8e45-599795442f1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7296424-ed41-429c-8e45-599795442f1d" (UID: "f7296424-ed41-429c-8e45-599795442f1d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.633830 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7296424-ed41-429c-8e45-599795442f1d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.634109 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7296424-ed41-429c-8e45-599795442f1d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.634119 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkdlw\" (UniqueName: \"kubernetes.io/projected/f7296424-ed41-429c-8e45-599795442f1d-kube-api-access-lkdlw\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.903217 4871 generic.go:334] "Generic (PLEG): container finished" podID="857f6040-72fa-4f10-97e4-2f4e4e2198a4" 
containerID="7be8cbecad2a7b5d36b3fe535a03fbe4c1458b9be28edae797f1f7695718a826" exitCode=0 Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.903283 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7rpqt" event={"ID":"857f6040-72fa-4f10-97e4-2f4e4e2198a4","Type":"ContainerDied","Data":"7be8cbecad2a7b5d36b3fe535a03fbe4c1458b9be28edae797f1f7695718a826"} Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.905894 4871 generic.go:334] "Generic (PLEG): container finished" podID="67d8b5ae-67a3-4ca7-b74e-35e5cc45d519" containerID="d725a5cc9aab785538d605610d3d31910b8113a6b5f21cb9a9be38b03d466c6c" exitCode=0 Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.905942 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxfdt" event={"ID":"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519","Type":"ContainerDied","Data":"d725a5cc9aab785538d605610d3d31910b8113a6b5f21cb9a9be38b03d466c6c"} Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.909304 4871 generic.go:334] "Generic (PLEG): container finished" podID="f7296424-ed41-429c-8e45-599795442f1d" containerID="fe41996cf20c26455cf348915e1d6b0841af54a9ce03d297beb23d64b0a931a7" exitCode=0 Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.909365 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ldxz9" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.909374 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldxz9" event={"ID":"f7296424-ed41-429c-8e45-599795442f1d","Type":"ContainerDied","Data":"fe41996cf20c26455cf348915e1d6b0841af54a9ce03d297beb23d64b0a931a7"} Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.909404 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldxz9" event={"ID":"f7296424-ed41-429c-8e45-599795442f1d","Type":"ContainerDied","Data":"72477bb9774747e694692621ecbeabc2d01332c7256d4f633de28d66a28c4bb6"} Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.909422 4871 scope.go:117] "RemoveContainer" containerID="fe41996cf20c26455cf348915e1d6b0841af54a9ce03d297beb23d64b0a931a7" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.912664 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdhpp" event={"ID":"9524a883-d083-471d-8c30-866172b8456e","Type":"ContainerStarted","Data":"084ed470cb799b27a8f7973a6e34c789f91540e741c821fcabca71b27d30ddb4"} Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.926113 4871 scope.go:117] "RemoveContainer" containerID="15fe145a1bcd76556cf22aa2a2de710ddeceb20898131088a82ef555aa8afebb" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.945438 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ldxz9"] Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.949037 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t2stq" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.949817 4871 scope.go:117] "RemoveContainer" containerID="3d2b68b5505f01ededb49f4326f49a54d4bdb1d27eaf1038fee7d72f051076cb" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 
15:21:01.950968 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ldxz9"] Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.971646 4871 scope.go:117] "RemoveContainer" containerID="fe41996cf20c26455cf348915e1d6b0841af54a9ce03d297beb23d64b0a931a7" Jan 28 15:21:01 crc kubenswrapper[4871]: E0128 15:21:01.973315 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe41996cf20c26455cf348915e1d6b0841af54a9ce03d297beb23d64b0a931a7\": container with ID starting with fe41996cf20c26455cf348915e1d6b0841af54a9ce03d297beb23d64b0a931a7 not found: ID does not exist" containerID="fe41996cf20c26455cf348915e1d6b0841af54a9ce03d297beb23d64b0a931a7" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.973345 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe41996cf20c26455cf348915e1d6b0841af54a9ce03d297beb23d64b0a931a7"} err="failed to get container status \"fe41996cf20c26455cf348915e1d6b0841af54a9ce03d297beb23d64b0a931a7\": rpc error: code = NotFound desc = could not find container \"fe41996cf20c26455cf348915e1d6b0841af54a9ce03d297beb23d64b0a931a7\": container with ID starting with fe41996cf20c26455cf348915e1d6b0841af54a9ce03d297beb23d64b0a931a7 not found: ID does not exist" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.973387 4871 scope.go:117] "RemoveContainer" containerID="15fe145a1bcd76556cf22aa2a2de710ddeceb20898131088a82ef555aa8afebb" Jan 28 15:21:01 crc kubenswrapper[4871]: E0128 15:21:01.973728 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15fe145a1bcd76556cf22aa2a2de710ddeceb20898131088a82ef555aa8afebb\": container with ID starting with 15fe145a1bcd76556cf22aa2a2de710ddeceb20898131088a82ef555aa8afebb not found: ID does not exist" 
containerID="15fe145a1bcd76556cf22aa2a2de710ddeceb20898131088a82ef555aa8afebb" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.973749 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15fe145a1bcd76556cf22aa2a2de710ddeceb20898131088a82ef555aa8afebb"} err="failed to get container status \"15fe145a1bcd76556cf22aa2a2de710ddeceb20898131088a82ef555aa8afebb\": rpc error: code = NotFound desc = could not find container \"15fe145a1bcd76556cf22aa2a2de710ddeceb20898131088a82ef555aa8afebb\": container with ID starting with 15fe145a1bcd76556cf22aa2a2de710ddeceb20898131088a82ef555aa8afebb not found: ID does not exist" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.973765 4871 scope.go:117] "RemoveContainer" containerID="3d2b68b5505f01ededb49f4326f49a54d4bdb1d27eaf1038fee7d72f051076cb" Jan 28 15:21:01 crc kubenswrapper[4871]: E0128 15:21:01.974250 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d2b68b5505f01ededb49f4326f49a54d4bdb1d27eaf1038fee7d72f051076cb\": container with ID starting with 3d2b68b5505f01ededb49f4326f49a54d4bdb1d27eaf1038fee7d72f051076cb not found: ID does not exist" containerID="3d2b68b5505f01ededb49f4326f49a54d4bdb1d27eaf1038fee7d72f051076cb" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.974270 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d2b68b5505f01ededb49f4326f49a54d4bdb1d27eaf1038fee7d72f051076cb"} err="failed to get container status \"3d2b68b5505f01ededb49f4326f49a54d4bdb1d27eaf1038fee7d72f051076cb\": rpc error: code = NotFound desc = could not find container \"3d2b68b5505f01ededb49f4326f49a54d4bdb1d27eaf1038fee7d72f051076cb\": container with ID starting with 3d2b68b5505f01ededb49f4326f49a54d4bdb1d27eaf1038fee7d72f051076cb not found: ID does not exist" Jan 28 15:21:01 crc kubenswrapper[4871]: I0128 15:21:01.978313 4871 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xdhpp" podStartSLOduration=3.902878725 podStartE2EDuration="1m1.978303975s" podCreationTimestamp="2026-01-28 15:20:00 +0000 UTC" firstStartedPulling="2026-01-28 15:20:02.975689569 +0000 UTC m=+154.871527891" lastFinishedPulling="2026-01-28 15:21:01.051114809 +0000 UTC m=+212.946953141" observedRunningTime="2026-01-28 15:21:01.97598288 +0000 UTC m=+213.871821202" watchObservedRunningTime="2026-01-28 15:21:01.978303975 +0000 UTC m=+213.874142297" Jan 28 15:21:02 crc kubenswrapper[4871]: I0128 15:21:02.917419 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7296424-ed41-429c-8e45-599795442f1d" path="/var/lib/kubelet/pods/f7296424-ed41-429c-8e45-599795442f1d/volumes" Jan 28 15:21:02 crc kubenswrapper[4871]: I0128 15:21:02.928352 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7rpqt" event={"ID":"857f6040-72fa-4f10-97e4-2f4e4e2198a4","Type":"ContainerStarted","Data":"11b365b24310e72a14f462e3e668def1d13f345505ced2b1c75ca5a7253dad87"} Jan 28 15:21:02 crc kubenswrapper[4871]: I0128 15:21:02.949001 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7rpqt" podStartSLOduration=2.517788981 podStartE2EDuration="1m3.948983893s" podCreationTimestamp="2026-01-28 15:19:59 +0000 UTC" firstStartedPulling="2026-01-28 15:20:00.904231635 +0000 UTC m=+152.800069957" lastFinishedPulling="2026-01-28 15:21:02.335426547 +0000 UTC m=+214.231264869" observedRunningTime="2026-01-28 15:21:02.944107905 +0000 UTC m=+214.839946227" watchObservedRunningTime="2026-01-28 15:21:02.948983893 +0000 UTC m=+214.844822215" Jan 28 15:21:03 crc kubenswrapper[4871]: I0128 15:21:03.936950 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxfdt" 
event={"ID":"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519","Type":"ContainerStarted","Data":"ae8af4dafecf2b7de7c43d465a2291d075d0bdd58ff4523642ba58f292dc755f"} Jan 28 15:21:03 crc kubenswrapper[4871]: I0128 15:21:03.966969 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pxfdt" podStartSLOduration=4.04517343 podStartE2EDuration="1m6.966952376s" podCreationTimestamp="2026-01-28 15:19:57 +0000 UTC" firstStartedPulling="2026-01-28 15:19:59.855338719 +0000 UTC m=+151.751177041" lastFinishedPulling="2026-01-28 15:21:02.777117625 +0000 UTC m=+214.672955987" observedRunningTime="2026-01-28 15:21:03.962657156 +0000 UTC m=+215.858495488" watchObservedRunningTime="2026-01-28 15:21:03.966952376 +0000 UTC m=+215.862790698" Jan 28 15:21:04 crc kubenswrapper[4871]: I0128 15:21:04.134448 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t2stq"] Jan 28 15:21:04 crc kubenswrapper[4871]: I0128 15:21:04.134846 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t2stq" podUID="d9695cd9-dbde-4846-a087-7224a1ece561" containerName="registry-server" containerID="cri-o://097d9c58737912189db014b7f1878135a50e3aa42be87727aaa0391ca05ad154" gracePeriod=2 Jan 28 15:21:04 crc kubenswrapper[4871]: I0128 15:21:04.956952 4871 generic.go:334] "Generic (PLEG): container finished" podID="d9695cd9-dbde-4846-a087-7224a1ece561" containerID="097d9c58737912189db014b7f1878135a50e3aa42be87727aaa0391ca05ad154" exitCode=0 Jan 28 15:21:04 crc kubenswrapper[4871]: I0128 15:21:04.957102 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2stq" event={"ID":"d9695cd9-dbde-4846-a087-7224a1ece561","Type":"ContainerDied","Data":"097d9c58737912189db014b7f1878135a50e3aa42be87727aaa0391ca05ad154"} Jan 28 15:21:05 crc kubenswrapper[4871]: I0128 15:21:05.139872 4871 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2stq" Jan 28 15:21:05 crc kubenswrapper[4871]: I0128 15:21:05.179916 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmt8w\" (UniqueName: \"kubernetes.io/projected/d9695cd9-dbde-4846-a087-7224a1ece561-kube-api-access-xmt8w\") pod \"d9695cd9-dbde-4846-a087-7224a1ece561\" (UID: \"d9695cd9-dbde-4846-a087-7224a1ece561\") " Jan 28 15:21:05 crc kubenswrapper[4871]: I0128 15:21:05.180049 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9695cd9-dbde-4846-a087-7224a1ece561-utilities\") pod \"d9695cd9-dbde-4846-a087-7224a1ece561\" (UID: \"d9695cd9-dbde-4846-a087-7224a1ece561\") " Jan 28 15:21:05 crc kubenswrapper[4871]: I0128 15:21:05.180145 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9695cd9-dbde-4846-a087-7224a1ece561-catalog-content\") pod \"d9695cd9-dbde-4846-a087-7224a1ece561\" (UID: \"d9695cd9-dbde-4846-a087-7224a1ece561\") " Jan 28 15:21:05 crc kubenswrapper[4871]: I0128 15:21:05.181450 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9695cd9-dbde-4846-a087-7224a1ece561-utilities" (OuterVolumeSpecName: "utilities") pod "d9695cd9-dbde-4846-a087-7224a1ece561" (UID: "d9695cd9-dbde-4846-a087-7224a1ece561"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:21:05 crc kubenswrapper[4871]: I0128 15:21:05.186559 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9695cd9-dbde-4846-a087-7224a1ece561-kube-api-access-xmt8w" (OuterVolumeSpecName: "kube-api-access-xmt8w") pod "d9695cd9-dbde-4846-a087-7224a1ece561" (UID: "d9695cd9-dbde-4846-a087-7224a1ece561"). InnerVolumeSpecName "kube-api-access-xmt8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:05 crc kubenswrapper[4871]: I0128 15:21:05.282204 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmt8w\" (UniqueName: \"kubernetes.io/projected/d9695cd9-dbde-4846-a087-7224a1ece561-kube-api-access-xmt8w\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:05 crc kubenswrapper[4871]: I0128 15:21:05.282278 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9695cd9-dbde-4846-a087-7224a1ece561-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:05 crc kubenswrapper[4871]: I0128 15:21:05.474553 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9695cd9-dbde-4846-a087-7224a1ece561-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9695cd9-dbde-4846-a087-7224a1ece561" (UID: "d9695cd9-dbde-4846-a087-7224a1ece561"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:21:05 crc kubenswrapper[4871]: I0128 15:21:05.484339 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9695cd9-dbde-4846-a087-7224a1ece561-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:05 crc kubenswrapper[4871]: I0128 15:21:05.966322 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2stq" event={"ID":"d9695cd9-dbde-4846-a087-7224a1ece561","Type":"ContainerDied","Data":"49f7af796eb9978c134f9b7aae1083ca7bcedec5a419831b9e6cd88f7297da9e"} Jan 28 15:21:05 crc kubenswrapper[4871]: I0128 15:21:05.966389 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t2stq" Jan 28 15:21:05 crc kubenswrapper[4871]: I0128 15:21:05.966397 4871 scope.go:117] "RemoveContainer" containerID="097d9c58737912189db014b7f1878135a50e3aa42be87727aaa0391ca05ad154" Jan 28 15:21:05 crc kubenswrapper[4871]: I0128 15:21:05.985096 4871 scope.go:117] "RemoveContainer" containerID="af2d24639943762f38449f2f782ee1fec05176b7a7236ce090f15b1c48a8a916" Jan 28 15:21:06 crc kubenswrapper[4871]: I0128 15:21:06.000745 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t2stq"] Jan 28 15:21:06 crc kubenswrapper[4871]: I0128 15:21:06.001567 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t2stq"] Jan 28 15:21:06 crc kubenswrapper[4871]: I0128 15:21:06.017230 4871 scope.go:117] "RemoveContainer" containerID="e4d794b793e0c32c9ccf07cc943f01f1c6872769b93868c39e78da8849e46976" Jan 28 15:21:06 crc kubenswrapper[4871]: I0128 15:21:06.911516 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9695cd9-dbde-4846-a087-7224a1ece561" path="/var/lib/kubelet/pods/d9695cd9-dbde-4846-a087-7224a1ece561/volumes" Jan 28 15:21:08 crc kubenswrapper[4871]: I0128 15:21:08.204886 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pxfdt" Jan 28 15:21:08 crc kubenswrapper[4871]: I0128 15:21:08.205018 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pxfdt" Jan 28 15:21:08 crc kubenswrapper[4871]: I0128 15:21:08.279487 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pxfdt" Jan 28 15:21:09 crc kubenswrapper[4871]: I0128 15:21:08.999829 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dzwqq"] Jan 28 15:21:09 crc kubenswrapper[4871]: 
I0128 15:21:09.058236 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pxfdt" Jan 28 15:21:09 crc kubenswrapper[4871]: I0128 15:21:09.538482 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pxfdt"] Jan 28 15:21:10 crc kubenswrapper[4871]: I0128 15:21:10.150647 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7rpqt" Jan 28 15:21:10 crc kubenswrapper[4871]: I0128 15:21:10.150732 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7rpqt" Jan 28 15:21:10 crc kubenswrapper[4871]: I0128 15:21:10.235736 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7rpqt" Jan 28 15:21:10 crc kubenswrapper[4871]: I0128 15:21:10.961601 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xdhpp" Jan 28 15:21:10 crc kubenswrapper[4871]: I0128 15:21:10.961649 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xdhpp" Jan 28 15:21:11 crc kubenswrapper[4871]: I0128 15:21:11.008287 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pxfdt" podUID="67d8b5ae-67a3-4ca7-b74e-35e5cc45d519" containerName="registry-server" containerID="cri-o://ae8af4dafecf2b7de7c43d465a2291d075d0bdd58ff4523642ba58f292dc755f" gracePeriod=2 Jan 28 15:21:11 crc kubenswrapper[4871]: I0128 15:21:11.016961 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xdhpp" Jan 28 15:21:11 crc kubenswrapper[4871]: I0128 15:21:11.058233 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-7rpqt" Jan 28 15:21:11 crc kubenswrapper[4871]: I0128 15:21:11.064622 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xdhpp" Jan 28 15:21:11 crc kubenswrapper[4871]: E0128 15:21:11.114042 4871 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67d8b5ae_67a3_4ca7_b74e_35e5cc45d519.slice/crio-conmon-ae8af4dafecf2b7de7c43d465a2291d075d0bdd58ff4523642ba58f292dc755f.scope\": RecentStats: unable to find data in memory cache]" Jan 28 15:21:11 crc kubenswrapper[4871]: I0128 15:21:11.407420 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pxfdt" Jan 28 15:21:11 crc kubenswrapper[4871]: I0128 15:21:11.562051 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67d8b5ae-67a3-4ca7-b74e-35e5cc45d519-utilities\") pod \"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519\" (UID: \"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519\") " Jan 28 15:21:11 crc kubenswrapper[4871]: I0128 15:21:11.562185 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28fjt\" (UniqueName: \"kubernetes.io/projected/67d8b5ae-67a3-4ca7-b74e-35e5cc45d519-kube-api-access-28fjt\") pod \"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519\" (UID: \"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519\") " Jan 28 15:21:11 crc kubenswrapper[4871]: I0128 15:21:11.562212 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67d8b5ae-67a3-4ca7-b74e-35e5cc45d519-catalog-content\") pod \"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519\" (UID: \"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519\") " Jan 28 15:21:11 crc kubenswrapper[4871]: I0128 15:21:11.563320 4871 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67d8b5ae-67a3-4ca7-b74e-35e5cc45d519-utilities" (OuterVolumeSpecName: "utilities") pod "67d8b5ae-67a3-4ca7-b74e-35e5cc45d519" (UID: "67d8b5ae-67a3-4ca7-b74e-35e5cc45d519"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:21:11 crc kubenswrapper[4871]: I0128 15:21:11.569927 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67d8b5ae-67a3-4ca7-b74e-35e5cc45d519-kube-api-access-28fjt" (OuterVolumeSpecName: "kube-api-access-28fjt") pod "67d8b5ae-67a3-4ca7-b74e-35e5cc45d519" (UID: "67d8b5ae-67a3-4ca7-b74e-35e5cc45d519"). InnerVolumeSpecName "kube-api-access-28fjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:11 crc kubenswrapper[4871]: I0128 15:21:11.611908 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67d8b5ae-67a3-4ca7-b74e-35e5cc45d519-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67d8b5ae-67a3-4ca7-b74e-35e5cc45d519" (UID: "67d8b5ae-67a3-4ca7-b74e-35e5cc45d519"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:21:11 crc kubenswrapper[4871]: I0128 15:21:11.663753 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28fjt\" (UniqueName: \"kubernetes.io/projected/67d8b5ae-67a3-4ca7-b74e-35e5cc45d519-kube-api-access-28fjt\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:11 crc kubenswrapper[4871]: I0128 15:21:11.663795 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67d8b5ae-67a3-4ca7-b74e-35e5cc45d519-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:11 crc kubenswrapper[4871]: I0128 15:21:11.663809 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67d8b5ae-67a3-4ca7-b74e-35e5cc45d519-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:12 crc kubenswrapper[4871]: I0128 15:21:12.015768 4871 generic.go:334] "Generic (PLEG): container finished" podID="67d8b5ae-67a3-4ca7-b74e-35e5cc45d519" containerID="ae8af4dafecf2b7de7c43d465a2291d075d0bdd58ff4523642ba58f292dc755f" exitCode=0 Jan 28 15:21:12 crc kubenswrapper[4871]: I0128 15:21:12.015807 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxfdt" event={"ID":"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519","Type":"ContainerDied","Data":"ae8af4dafecf2b7de7c43d465a2291d075d0bdd58ff4523642ba58f292dc755f"} Jan 28 15:21:12 crc kubenswrapper[4871]: I0128 15:21:12.015844 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxfdt" event={"ID":"67d8b5ae-67a3-4ca7-b74e-35e5cc45d519","Type":"ContainerDied","Data":"7d47860d750dc6ebf0328f70cd54381dfbab107c657b6bb131991ff6ba0a0928"} Jan 28 15:21:12 crc kubenswrapper[4871]: I0128 15:21:12.015870 4871 scope.go:117] "RemoveContainer" containerID="ae8af4dafecf2b7de7c43d465a2291d075d0bdd58ff4523642ba58f292dc755f" Jan 28 15:21:12 crc kubenswrapper[4871]: I0128 
15:21:12.015933 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pxfdt" Jan 28 15:21:12 crc kubenswrapper[4871]: I0128 15:21:12.032825 4871 scope.go:117] "RemoveContainer" containerID="d725a5cc9aab785538d605610d3d31910b8113a6b5f21cb9a9be38b03d466c6c" Jan 28 15:21:12 crc kubenswrapper[4871]: I0128 15:21:12.056961 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pxfdt"] Jan 28 15:21:12 crc kubenswrapper[4871]: I0128 15:21:12.062062 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pxfdt"] Jan 28 15:21:12 crc kubenswrapper[4871]: I0128 15:21:12.069814 4871 scope.go:117] "RemoveContainer" containerID="4e79bc6ab1e19842e5d4f9023f17a5ea72d9012b2a0288747ee09d4ed01afb74" Jan 28 15:21:12 crc kubenswrapper[4871]: I0128 15:21:12.100789 4871 scope.go:117] "RemoveContainer" containerID="ae8af4dafecf2b7de7c43d465a2291d075d0bdd58ff4523642ba58f292dc755f" Jan 28 15:21:12 crc kubenswrapper[4871]: E0128 15:21:12.104738 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae8af4dafecf2b7de7c43d465a2291d075d0bdd58ff4523642ba58f292dc755f\": container with ID starting with ae8af4dafecf2b7de7c43d465a2291d075d0bdd58ff4523642ba58f292dc755f not found: ID does not exist" containerID="ae8af4dafecf2b7de7c43d465a2291d075d0bdd58ff4523642ba58f292dc755f" Jan 28 15:21:12 crc kubenswrapper[4871]: I0128 15:21:12.104780 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae8af4dafecf2b7de7c43d465a2291d075d0bdd58ff4523642ba58f292dc755f"} err="failed to get container status \"ae8af4dafecf2b7de7c43d465a2291d075d0bdd58ff4523642ba58f292dc755f\": rpc error: code = NotFound desc = could not find container \"ae8af4dafecf2b7de7c43d465a2291d075d0bdd58ff4523642ba58f292dc755f\": container with ID starting with 
ae8af4dafecf2b7de7c43d465a2291d075d0bdd58ff4523642ba58f292dc755f not found: ID does not exist" Jan 28 15:21:12 crc kubenswrapper[4871]: I0128 15:21:12.104806 4871 scope.go:117] "RemoveContainer" containerID="d725a5cc9aab785538d605610d3d31910b8113a6b5f21cb9a9be38b03d466c6c" Jan 28 15:21:12 crc kubenswrapper[4871]: E0128 15:21:12.106910 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d725a5cc9aab785538d605610d3d31910b8113a6b5f21cb9a9be38b03d466c6c\": container with ID starting with d725a5cc9aab785538d605610d3d31910b8113a6b5f21cb9a9be38b03d466c6c not found: ID does not exist" containerID="d725a5cc9aab785538d605610d3d31910b8113a6b5f21cb9a9be38b03d466c6c" Jan 28 15:21:12 crc kubenswrapper[4871]: I0128 15:21:12.106980 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d725a5cc9aab785538d605610d3d31910b8113a6b5f21cb9a9be38b03d466c6c"} err="failed to get container status \"d725a5cc9aab785538d605610d3d31910b8113a6b5f21cb9a9be38b03d466c6c\": rpc error: code = NotFound desc = could not find container \"d725a5cc9aab785538d605610d3d31910b8113a6b5f21cb9a9be38b03d466c6c\": container with ID starting with d725a5cc9aab785538d605610d3d31910b8113a6b5f21cb9a9be38b03d466c6c not found: ID does not exist" Jan 28 15:21:12 crc kubenswrapper[4871]: I0128 15:21:12.107022 4871 scope.go:117] "RemoveContainer" containerID="4e79bc6ab1e19842e5d4f9023f17a5ea72d9012b2a0288747ee09d4ed01afb74" Jan 28 15:21:12 crc kubenswrapper[4871]: E0128 15:21:12.107645 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e79bc6ab1e19842e5d4f9023f17a5ea72d9012b2a0288747ee09d4ed01afb74\": container with ID starting with 4e79bc6ab1e19842e5d4f9023f17a5ea72d9012b2a0288747ee09d4ed01afb74 not found: ID does not exist" containerID="4e79bc6ab1e19842e5d4f9023f17a5ea72d9012b2a0288747ee09d4ed01afb74" Jan 28 15:21:12 crc 
kubenswrapper[4871]: I0128 15:21:12.107747 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e79bc6ab1e19842e5d4f9023f17a5ea72d9012b2a0288747ee09d4ed01afb74"} err="failed to get container status \"4e79bc6ab1e19842e5d4f9023f17a5ea72d9012b2a0288747ee09d4ed01afb74\": rpc error: code = NotFound desc = could not find container \"4e79bc6ab1e19842e5d4f9023f17a5ea72d9012b2a0288747ee09d4ed01afb74\": container with ID starting with 4e79bc6ab1e19842e5d4f9023f17a5ea72d9012b2a0288747ee09d4ed01afb74 not found: ID does not exist" Jan 28 15:21:12 crc kubenswrapper[4871]: I0128 15:21:12.537566 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7rpqt"] Jan 28 15:21:12 crc kubenswrapper[4871]: I0128 15:21:12.916519 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67d8b5ae-67a3-4ca7-b74e-35e5cc45d519" path="/var/lib/kubelet/pods/67d8b5ae-67a3-4ca7-b74e-35e5cc45d519/volumes" Jan 28 15:21:13 crc kubenswrapper[4871]: I0128 15:21:13.024231 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7rpqt" podUID="857f6040-72fa-4f10-97e4-2f4e4e2198a4" containerName="registry-server" containerID="cri-o://11b365b24310e72a14f462e3e668def1d13f345505ced2b1c75ca5a7253dad87" gracePeriod=2 Jan 28 15:21:13 crc kubenswrapper[4871]: I0128 15:21:13.470806 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7rpqt" Jan 28 15:21:13 crc kubenswrapper[4871]: I0128 15:21:13.588691 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/857f6040-72fa-4f10-97e4-2f4e4e2198a4-catalog-content\") pod \"857f6040-72fa-4f10-97e4-2f4e4e2198a4\" (UID: \"857f6040-72fa-4f10-97e4-2f4e4e2198a4\") " Jan 28 15:21:13 crc kubenswrapper[4871]: I0128 15:21:13.588765 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/857f6040-72fa-4f10-97e4-2f4e4e2198a4-utilities\") pod \"857f6040-72fa-4f10-97e4-2f4e4e2198a4\" (UID: \"857f6040-72fa-4f10-97e4-2f4e4e2198a4\") " Jan 28 15:21:13 crc kubenswrapper[4871]: I0128 15:21:13.588826 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpwh2\" (UniqueName: \"kubernetes.io/projected/857f6040-72fa-4f10-97e4-2f4e4e2198a4-kube-api-access-cpwh2\") pod \"857f6040-72fa-4f10-97e4-2f4e4e2198a4\" (UID: \"857f6040-72fa-4f10-97e4-2f4e4e2198a4\") " Jan 28 15:21:13 crc kubenswrapper[4871]: I0128 15:21:13.589494 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/857f6040-72fa-4f10-97e4-2f4e4e2198a4-utilities" (OuterVolumeSpecName: "utilities") pod "857f6040-72fa-4f10-97e4-2f4e4e2198a4" (UID: "857f6040-72fa-4f10-97e4-2f4e4e2198a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:21:13 crc kubenswrapper[4871]: I0128 15:21:13.592533 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/857f6040-72fa-4f10-97e4-2f4e4e2198a4-kube-api-access-cpwh2" (OuterVolumeSpecName: "kube-api-access-cpwh2") pod "857f6040-72fa-4f10-97e4-2f4e4e2198a4" (UID: "857f6040-72fa-4f10-97e4-2f4e4e2198a4"). InnerVolumeSpecName "kube-api-access-cpwh2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:13 crc kubenswrapper[4871]: I0128 15:21:13.614240 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/857f6040-72fa-4f10-97e4-2f4e4e2198a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "857f6040-72fa-4f10-97e4-2f4e4e2198a4" (UID: "857f6040-72fa-4f10-97e4-2f4e4e2198a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:21:13 crc kubenswrapper[4871]: I0128 15:21:13.690266 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/857f6040-72fa-4f10-97e4-2f4e4e2198a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:13 crc kubenswrapper[4871]: I0128 15:21:13.690333 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpwh2\" (UniqueName: \"kubernetes.io/projected/857f6040-72fa-4f10-97e4-2f4e4e2198a4-kube-api-access-cpwh2\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:13 crc kubenswrapper[4871]: I0128 15:21:13.690363 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/857f6040-72fa-4f10-97e4-2f4e4e2198a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:13 crc kubenswrapper[4871]: I0128 15:21:13.814304 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:21:13 crc kubenswrapper[4871]: I0128 15:21:13.814409 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:21:13 crc kubenswrapper[4871]: I0128 15:21:13.814485 4871 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:21:13 crc kubenswrapper[4871]: I0128 15:21:13.815491 4871 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc39c6865d02dd3f85f379df66133847e143e74d6b083015d0e30402088a16ea"} pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:21:13 crc kubenswrapper[4871]: I0128 15:21:13.815638 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" containerID="cri-o://bc39c6865d02dd3f85f379df66133847e143e74d6b083015d0e30402088a16ea" gracePeriod=600 Jan 28 15:21:14 crc kubenswrapper[4871]: I0128 15:21:14.036708 4871 generic.go:334] "Generic (PLEG): container finished" podID="857f6040-72fa-4f10-97e4-2f4e4e2198a4" containerID="11b365b24310e72a14f462e3e668def1d13f345505ced2b1c75ca5a7253dad87" exitCode=0 Jan 28 15:21:14 crc kubenswrapper[4871]: I0128 15:21:14.036803 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7rpqt" Jan 28 15:21:14 crc kubenswrapper[4871]: I0128 15:21:14.036778 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7rpqt" event={"ID":"857f6040-72fa-4f10-97e4-2f4e4e2198a4","Type":"ContainerDied","Data":"11b365b24310e72a14f462e3e668def1d13f345505ced2b1c75ca5a7253dad87"} Jan 28 15:21:14 crc kubenswrapper[4871]: I0128 15:21:14.036932 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7rpqt" event={"ID":"857f6040-72fa-4f10-97e4-2f4e4e2198a4","Type":"ContainerDied","Data":"004e6c1e2e3778dee5aa170d6418a4ad6fe7e684231e186db973cf6ae400835d"} Jan 28 15:21:14 crc kubenswrapper[4871]: I0128 15:21:14.037024 4871 scope.go:117] "RemoveContainer" containerID="11b365b24310e72a14f462e3e668def1d13f345505ced2b1c75ca5a7253dad87" Jan 28 15:21:14 crc kubenswrapper[4871]: I0128 15:21:14.042286 4871 generic.go:334] "Generic (PLEG): container finished" podID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerID="bc39c6865d02dd3f85f379df66133847e143e74d6b083015d0e30402088a16ea" exitCode=0 Jan 28 15:21:14 crc kubenswrapper[4871]: I0128 15:21:14.042372 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerDied","Data":"bc39c6865d02dd3f85f379df66133847e143e74d6b083015d0e30402088a16ea"} Jan 28 15:21:14 crc kubenswrapper[4871]: I0128 15:21:14.084902 4871 scope.go:117] "RemoveContainer" containerID="7be8cbecad2a7b5d36b3fe535a03fbe4c1458b9be28edae797f1f7695718a826" Jan 28 15:21:14 crc kubenswrapper[4871]: I0128 15:21:14.105294 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7rpqt"] Jan 28 15:21:14 crc kubenswrapper[4871]: I0128 15:21:14.110462 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-7rpqt"] Jan 28 15:21:14 crc kubenswrapper[4871]: I0128 15:21:14.127918 4871 scope.go:117] "RemoveContainer" containerID="808893a25855ee26304717f1dcc08f26507c97b1dc0edcdae86a401c5ba8b067" Jan 28 15:21:14 crc kubenswrapper[4871]: I0128 15:21:14.149717 4871 scope.go:117] "RemoveContainer" containerID="11b365b24310e72a14f462e3e668def1d13f345505ced2b1c75ca5a7253dad87" Jan 28 15:21:14 crc kubenswrapper[4871]: E0128 15:21:14.150237 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11b365b24310e72a14f462e3e668def1d13f345505ced2b1c75ca5a7253dad87\": container with ID starting with 11b365b24310e72a14f462e3e668def1d13f345505ced2b1c75ca5a7253dad87 not found: ID does not exist" containerID="11b365b24310e72a14f462e3e668def1d13f345505ced2b1c75ca5a7253dad87" Jan 28 15:21:14 crc kubenswrapper[4871]: I0128 15:21:14.150280 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11b365b24310e72a14f462e3e668def1d13f345505ced2b1c75ca5a7253dad87"} err="failed to get container status \"11b365b24310e72a14f462e3e668def1d13f345505ced2b1c75ca5a7253dad87\": rpc error: code = NotFound desc = could not find container \"11b365b24310e72a14f462e3e668def1d13f345505ced2b1c75ca5a7253dad87\": container with ID starting with 11b365b24310e72a14f462e3e668def1d13f345505ced2b1c75ca5a7253dad87 not found: ID does not exist" Jan 28 15:21:14 crc kubenswrapper[4871]: I0128 15:21:14.150317 4871 scope.go:117] "RemoveContainer" containerID="7be8cbecad2a7b5d36b3fe535a03fbe4c1458b9be28edae797f1f7695718a826" Jan 28 15:21:14 crc kubenswrapper[4871]: E0128 15:21:14.150708 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7be8cbecad2a7b5d36b3fe535a03fbe4c1458b9be28edae797f1f7695718a826\": container with ID starting with 
7be8cbecad2a7b5d36b3fe535a03fbe4c1458b9be28edae797f1f7695718a826 not found: ID does not exist" containerID="7be8cbecad2a7b5d36b3fe535a03fbe4c1458b9be28edae797f1f7695718a826" Jan 28 15:21:14 crc kubenswrapper[4871]: I0128 15:21:14.150758 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7be8cbecad2a7b5d36b3fe535a03fbe4c1458b9be28edae797f1f7695718a826"} err="failed to get container status \"7be8cbecad2a7b5d36b3fe535a03fbe4c1458b9be28edae797f1f7695718a826\": rpc error: code = NotFound desc = could not find container \"7be8cbecad2a7b5d36b3fe535a03fbe4c1458b9be28edae797f1f7695718a826\": container with ID starting with 7be8cbecad2a7b5d36b3fe535a03fbe4c1458b9be28edae797f1f7695718a826 not found: ID does not exist" Jan 28 15:21:14 crc kubenswrapper[4871]: I0128 15:21:14.150794 4871 scope.go:117] "RemoveContainer" containerID="808893a25855ee26304717f1dcc08f26507c97b1dc0edcdae86a401c5ba8b067" Jan 28 15:21:14 crc kubenswrapper[4871]: E0128 15:21:14.152243 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"808893a25855ee26304717f1dcc08f26507c97b1dc0edcdae86a401c5ba8b067\": container with ID starting with 808893a25855ee26304717f1dcc08f26507c97b1dc0edcdae86a401c5ba8b067 not found: ID does not exist" containerID="808893a25855ee26304717f1dcc08f26507c97b1dc0edcdae86a401c5ba8b067" Jan 28 15:21:14 crc kubenswrapper[4871]: I0128 15:21:14.152279 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"808893a25855ee26304717f1dcc08f26507c97b1dc0edcdae86a401c5ba8b067"} err="failed to get container status \"808893a25855ee26304717f1dcc08f26507c97b1dc0edcdae86a401c5ba8b067\": rpc error: code = NotFound desc = could not find container \"808893a25855ee26304717f1dcc08f26507c97b1dc0edcdae86a401c5ba8b067\": container with ID starting with 808893a25855ee26304717f1dcc08f26507c97b1dc0edcdae86a401c5ba8b067 not found: ID does not 
exist" Jan 28 15:21:14 crc kubenswrapper[4871]: I0128 15:21:14.911127 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="857f6040-72fa-4f10-97e4-2f4e4e2198a4" path="/var/lib/kubelet/pods/857f6040-72fa-4f10-97e4-2f4e4e2198a4/volumes" Jan 28 15:21:15 crc kubenswrapper[4871]: I0128 15:21:15.049713 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerStarted","Data":"6d2a91b27216ac0ed31be3b15ce348b7ccc9fb4adf015bf16e3f9e5058afd244"} Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.347735 4871 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 15:21:22 crc kubenswrapper[4871]: E0128 15:21:22.348619 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67d8b5ae-67a3-4ca7-b74e-35e5cc45d519" containerName="extract-utilities" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.348635 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="67d8b5ae-67a3-4ca7-b74e-35e5cc45d519" containerName="extract-utilities" Jan 28 15:21:22 crc kubenswrapper[4871]: E0128 15:21:22.348645 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7296424-ed41-429c-8e45-599795442f1d" containerName="extract-content" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.348655 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7296424-ed41-429c-8e45-599795442f1d" containerName="extract-content" Jan 28 15:21:22 crc kubenswrapper[4871]: E0128 15:21:22.348669 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7296424-ed41-429c-8e45-599795442f1d" containerName="extract-utilities" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.348678 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7296424-ed41-429c-8e45-599795442f1d" containerName="extract-utilities" Jan 28 15:21:22 crc 
kubenswrapper[4871]: E0128 15:21:22.348690 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7296424-ed41-429c-8e45-599795442f1d" containerName="registry-server" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.348698 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7296424-ed41-429c-8e45-599795442f1d" containerName="registry-server" Jan 28 15:21:22 crc kubenswrapper[4871]: E0128 15:21:22.348707 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="857f6040-72fa-4f10-97e4-2f4e4e2198a4" containerName="extract-content" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.348714 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="857f6040-72fa-4f10-97e4-2f4e4e2198a4" containerName="extract-content" Jan 28 15:21:22 crc kubenswrapper[4871]: E0128 15:21:22.348728 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67d8b5ae-67a3-4ca7-b74e-35e5cc45d519" containerName="registry-server" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.348736 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="67d8b5ae-67a3-4ca7-b74e-35e5cc45d519" containerName="registry-server" Jan 28 15:21:22 crc kubenswrapper[4871]: E0128 15:21:22.348745 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67d8b5ae-67a3-4ca7-b74e-35e5cc45d519" containerName="extract-content" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.348752 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="67d8b5ae-67a3-4ca7-b74e-35e5cc45d519" containerName="extract-content" Jan 28 15:21:22 crc kubenswrapper[4871]: E0128 15:21:22.348762 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9695cd9-dbde-4846-a087-7224a1ece561" containerName="extract-content" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.348772 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9695cd9-dbde-4846-a087-7224a1ece561" containerName="extract-content" Jan 28 15:21:22 crc 
kubenswrapper[4871]: E0128 15:21:22.348788 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9695cd9-dbde-4846-a087-7224a1ece561" containerName="extract-utilities" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.348796 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9695cd9-dbde-4846-a087-7224a1ece561" containerName="extract-utilities" Jan 28 15:21:22 crc kubenswrapper[4871]: E0128 15:21:22.348808 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="857f6040-72fa-4f10-97e4-2f4e4e2198a4" containerName="extract-utilities" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.348814 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="857f6040-72fa-4f10-97e4-2f4e4e2198a4" containerName="extract-utilities" Jan 28 15:21:22 crc kubenswrapper[4871]: E0128 15:21:22.348822 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9695cd9-dbde-4846-a087-7224a1ece561" containerName="registry-server" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.348829 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9695cd9-dbde-4846-a087-7224a1ece561" containerName="registry-server" Jan 28 15:21:22 crc kubenswrapper[4871]: E0128 15:21:22.348842 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="857f6040-72fa-4f10-97e4-2f4e4e2198a4" containerName="registry-server" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.348850 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="857f6040-72fa-4f10-97e4-2f4e4e2198a4" containerName="registry-server" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.348964 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7296424-ed41-429c-8e45-599795442f1d" containerName="registry-server" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.348979 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="857f6040-72fa-4f10-97e4-2f4e4e2198a4" containerName="registry-server" Jan 28 15:21:22 
crc kubenswrapper[4871]: I0128 15:21:22.348995 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="67d8b5ae-67a3-4ca7-b74e-35e5cc45d519" containerName="registry-server" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.349006 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9695cd9-dbde-4846-a087-7224a1ece561" containerName="registry-server" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.349459 4871 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.349776 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572" gracePeriod=15 Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.349855 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f" gracePeriod=15 Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.349921 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0" gracePeriod=15 Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.349980 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" 
containerID="cri-o://cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806" gracePeriod=15 Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.349971 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b" gracePeriod=15 Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.349897 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.350634 4871 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 15:21:22 crc kubenswrapper[4871]: E0128 15:21:22.350784 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.350798 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 15:21:22 crc kubenswrapper[4871]: E0128 15:21:22.350810 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.350818 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:21:22 crc kubenswrapper[4871]: E0128 15:21:22.350826 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.350834 4871 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 15:21:22 crc kubenswrapper[4871]: E0128 15:21:22.350848 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.350856 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 15:21:22 crc kubenswrapper[4871]: E0128 15:21:22.350867 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.350874 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:21:22 crc kubenswrapper[4871]: E0128 15:21:22.350883 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.350890 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 15:21:22 crc kubenswrapper[4871]: E0128 15:21:22.350904 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.350911 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.351029 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 
15:21:22.351042 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.351055 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.351067 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.351081 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.351092 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 15:21:22 crc kubenswrapper[4871]: E0128 15:21:22.351216 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.351228 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.351344 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.419387 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.419465 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.419506 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.419537 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.419571 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.419612 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.419637 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.419663 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.521249 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.521312 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.521333 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.521351 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.521372 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.521404 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.521435 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.521434 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.521483 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.521498 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.521545 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.521459 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.521522 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 
28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.521520 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.521498 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:22 crc kubenswrapper[4871]: I0128 15:21:22.521543 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:21:23 crc kubenswrapper[4871]: I0128 15:21:23.102254 4871 generic.go:334] "Generic (PLEG): container finished" podID="eaebff6e-df1b-477a-8eda-dd86e54561b0" containerID="81889f26b8916c02429856a7195aa1f64a819070e01afd3c7abdda9ecdf6908c" exitCode=0 Jan 28 15:21:23 crc kubenswrapper[4871]: I0128 15:21:23.102373 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"eaebff6e-df1b-477a-8eda-dd86e54561b0","Type":"ContainerDied","Data":"81889f26b8916c02429856a7195aa1f64a819070e01afd3c7abdda9ecdf6908c"} Jan 28 15:21:23 crc kubenswrapper[4871]: I0128 15:21:23.103899 4871 status_manager.go:851] "Failed to get status for pod" podUID="eaebff6e-df1b-477a-8eda-dd86e54561b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:23 crc kubenswrapper[4871]: I0128 15:21:23.108308 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 15:21:23 crc kubenswrapper[4871]: I0128 15:21:23.110804 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 15:21:23 crc kubenswrapper[4871]: I0128 15:21:23.112438 4871 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b" exitCode=0 Jan 28 15:21:23 crc kubenswrapper[4871]: I0128 15:21:23.112571 4871 scope.go:117] "RemoveContainer" containerID="e9efdd07ea673a99c7a49ddc185569460d54257d7f4bb06970b584c90d2f64bb" Jan 28 15:21:23 crc kubenswrapper[4871]: I0128 15:21:23.112731 4871 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f" exitCode=0 Jan 28 15:21:23 crc kubenswrapper[4871]: I0128 15:21:23.112908 4871 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0" exitCode=0 Jan 28 15:21:23 crc kubenswrapper[4871]: I0128 15:21:23.112942 4871 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806" exitCode=2 Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.126969 4871 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.536680 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.537518 4871 status_manager.go:851] "Failed to get status for pod" podUID="eaebff6e-df1b-477a-8eda-dd86e54561b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.649362 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eaebff6e-df1b-477a-8eda-dd86e54561b0-kubelet-dir\") pod \"eaebff6e-df1b-477a-8eda-dd86e54561b0\" (UID: \"eaebff6e-df1b-477a-8eda-dd86e54561b0\") " Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.649496 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eaebff6e-df1b-477a-8eda-dd86e54561b0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "eaebff6e-df1b-477a-8eda-dd86e54561b0" (UID: "eaebff6e-df1b-477a-8eda-dd86e54561b0"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.649866 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eaebff6e-df1b-477a-8eda-dd86e54561b0-var-lock\") pod \"eaebff6e-df1b-477a-8eda-dd86e54561b0\" (UID: \"eaebff6e-df1b-477a-8eda-dd86e54561b0\") " Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.649914 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eaebff6e-df1b-477a-8eda-dd86e54561b0-var-lock" (OuterVolumeSpecName: "var-lock") pod "eaebff6e-df1b-477a-8eda-dd86e54561b0" (UID: "eaebff6e-df1b-477a-8eda-dd86e54561b0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.649976 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaebff6e-df1b-477a-8eda-dd86e54561b0-kube-api-access\") pod \"eaebff6e-df1b-477a-8eda-dd86e54561b0\" (UID: \"eaebff6e-df1b-477a-8eda-dd86e54561b0\") " Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.650391 4871 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eaebff6e-df1b-477a-8eda-dd86e54561b0-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.650407 4871 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eaebff6e-df1b-477a-8eda-dd86e54561b0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.657478 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaebff6e-df1b-477a-8eda-dd86e54561b0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "eaebff6e-df1b-477a-8eda-dd86e54561b0" 
(UID: "eaebff6e-df1b-477a-8eda-dd86e54561b0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.713951 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.714838 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.715442 4871 status_manager.go:851] "Failed to get status for pod" podUID="eaebff6e-df1b-477a-8eda-dd86e54561b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.716029 4871 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.751785 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaebff6e-df1b-477a-8eda-dd86e54561b0-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.852898 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 
15:21:24.852966 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.853029 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.853084 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.853189 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.853216 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.853483 4871 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.853507 4871 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.853524 4871 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:24 crc kubenswrapper[4871]: I0128 15:21:24.914828 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.141003 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.142271 4871 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572" exitCode=0 Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.142471 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.142494 4871 scope.go:117] "RemoveContainer" containerID="b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.144396 4871 status_manager.go:851] "Failed to get status for pod" podUID="eaebff6e-df1b-477a-8eda-dd86e54561b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.144956 4871 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.145566 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"eaebff6e-df1b-477a-8eda-dd86e54561b0","Type":"ContainerDied","Data":"d694ae1c5b6a7e692247bee7ed2420af2ebdc54e420e85c84ead0db3b8c18661"} Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.145811 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d694ae1c5b6a7e692247bee7ed2420af2ebdc54e420e85c84ead0db3b8c18661" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.145638 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.148201 4871 status_manager.go:851] "Failed to get status for pod" podUID="eaebff6e-df1b-477a-8eda-dd86e54561b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.148811 4871 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.151392 4871 status_manager.go:851] "Failed to get status for pod" podUID="eaebff6e-df1b-477a-8eda-dd86e54561b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.151933 4871 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.167657 4871 scope.go:117] "RemoveContainer" containerID="f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.187581 4871 scope.go:117] "RemoveContainer" containerID="d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0" Jan 28 15:21:25 crc 
kubenswrapper[4871]: I0128 15:21:25.207965 4871 scope.go:117] "RemoveContainer" containerID="cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.227963 4871 scope.go:117] "RemoveContainer" containerID="0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.253282 4871 scope.go:117] "RemoveContainer" containerID="b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.286001 4871 scope.go:117] "RemoveContainer" containerID="b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b" Jan 28 15:21:25 crc kubenswrapper[4871]: E0128 15:21:25.286708 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\": container with ID starting with b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b not found: ID does not exist" containerID="b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.286768 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b"} err="failed to get container status \"b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\": rpc error: code = NotFound desc = could not find container \"b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b\": container with ID starting with b0b942386db9a41f0176f20ce472021ef051d6478981a7852736dcc3761e191b not found: ID does not exist" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.286809 4871 scope.go:117] "RemoveContainer" containerID="f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f" Jan 28 15:21:25 crc kubenswrapper[4871]: E0128 15:21:25.287312 
4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\": container with ID starting with f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f not found: ID does not exist" containerID="f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.287353 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f"} err="failed to get container status \"f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\": rpc error: code = NotFound desc = could not find container \"f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f\": container with ID starting with f5dd0e006cec79faeed69d1698e198709b83619178b61353cdb3b6b634325e5f not found: ID does not exist" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.287378 4871 scope.go:117] "RemoveContainer" containerID="d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0" Jan 28 15:21:25 crc kubenswrapper[4871]: E0128 15:21:25.287864 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\": container with ID starting with d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0 not found: ID does not exist" containerID="d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.287916 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0"} err="failed to get container status \"d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\": rpc error: code = 
NotFound desc = could not find container \"d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0\": container with ID starting with d08b1d829691a0b991e799b887732c1a8a514af4be03b216a802599070eb69d0 not found: ID does not exist" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.287955 4871 scope.go:117] "RemoveContainer" containerID="cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806" Jan 28 15:21:25 crc kubenswrapper[4871]: E0128 15:21:25.288421 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\": container with ID starting with cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806 not found: ID does not exist" containerID="cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.288463 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806"} err="failed to get container status \"cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\": rpc error: code = NotFound desc = could not find container \"cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806\": container with ID starting with cf60ba1c1d6c0f5cbe78984717f6056870b540928c9ab175d447a676fe24e806 not found: ID does not exist" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.288489 4871 scope.go:117] "RemoveContainer" containerID="0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572" Jan 28 15:21:25 crc kubenswrapper[4871]: E0128 15:21:25.289045 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\": container with ID starting with 
0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572 not found: ID does not exist" containerID="0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.289084 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572"} err="failed to get container status \"0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\": rpc error: code = NotFound desc = could not find container \"0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572\": container with ID starting with 0c17f5556c94a4cf7cb13215fc692604015bddfdc2c5c605ab669eb7ff6ba572 not found: ID does not exist" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.289109 4871 scope.go:117] "RemoveContainer" containerID="b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a" Jan 28 15:21:25 crc kubenswrapper[4871]: E0128 15:21:25.289693 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\": container with ID starting with b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a not found: ID does not exist" containerID="b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a" Jan 28 15:21:25 crc kubenswrapper[4871]: I0128 15:21:25.289763 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a"} err="failed to get container status \"b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\": rpc error: code = NotFound desc = could not find container \"b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a\": container with ID starting with b3ab0f9fbb8b918e2a449b004a335f80c7b82f1454f2daacad2f1aeedda39f5a not found: ID does not 
exist" Jan 28 15:21:27 crc kubenswrapper[4871]: E0128 15:21:27.391772 4871 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.199:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:21:27 crc kubenswrapper[4871]: I0128 15:21:27.392638 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:21:27 crc kubenswrapper[4871]: E0128 15:21:27.431354 4871 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.199:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188eee45dcfb5da4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:21:27.430798756 +0000 UTC m=+239.326637078,LastTimestamp:2026-01-28 15:21:27.430798756 +0000 UTC m=+239.326637078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:21:28 crc kubenswrapper[4871]: I0128 15:21:28.165576 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"584e7eaf7e2b5c5704cae4d887830069f8aaf1f3d04bf2de9def789ffb72731c"} Jan 28 15:21:28 crc kubenswrapper[4871]: I0128 15:21:28.166220 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8239b689b57da69156ee98b39e29cf06963c91cf2311ad44ac1a15ce7ea00487"} Jan 28 15:21:28 crc kubenswrapper[4871]: I0128 15:21:28.167116 4871 status_manager.go:851] "Failed to get status for pod" podUID="eaebff6e-df1b-477a-8eda-dd86e54561b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:28 crc kubenswrapper[4871]: E0128 15:21:28.167161 4871 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.199:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:21:28 crc kubenswrapper[4871]: E0128 15:21:28.736269 4871 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:28 crc kubenswrapper[4871]: E0128 15:21:28.736993 4871 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:28 crc kubenswrapper[4871]: E0128 15:21:28.737799 4871 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:28 crc kubenswrapper[4871]: E0128 15:21:28.738950 4871 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:28 crc kubenswrapper[4871]: E0128 15:21:28.739326 4871 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:28 crc kubenswrapper[4871]: I0128 15:21:28.739385 4871 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 28 15:21:28 crc kubenswrapper[4871]: E0128 15:21:28.739972 4871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="200ms" Jan 28 15:21:28 crc kubenswrapper[4871]: I0128 15:21:28.906125 4871 status_manager.go:851] "Failed to get status for pod" podUID="eaebff6e-df1b-477a-8eda-dd86e54561b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:28 crc kubenswrapper[4871]: E0128 15:21:28.940408 4871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" 
interval="400ms" Jan 28 15:21:28 crc kubenswrapper[4871]: E0128 15:21:28.949703 4871 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.199:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188eee45dcfb5da4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:21:27.430798756 +0000 UTC m=+239.326637078,LastTimestamp:2026-01-28 15:21:27.430798756 +0000 UTC m=+239.326637078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:21:29 crc kubenswrapper[4871]: E0128 15:21:29.341760 4871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="800ms" Jan 28 15:21:30 crc kubenswrapper[4871]: E0128 15:21:30.143143 4871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="1.6s" Jan 28 15:21:31 crc kubenswrapper[4871]: E0128 15:21:31.744438 4871 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="3.2s" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.034774 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" podUID="2aa3d7cc-57c4-420c-bb92-e7fc4525a763" containerName="oauth-openshift" containerID="cri-o://92b0c7bed4c3ff927ec53a56851fb7ddab37d7d69269ca428e40f7c5c8fa2d4e" gracePeriod=15 Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.211649 4871 generic.go:334] "Generic (PLEG): container finished" podID="2aa3d7cc-57c4-420c-bb92-e7fc4525a763" containerID="92b0c7bed4c3ff927ec53a56851fb7ddab37d7d69269ca428e40f7c5c8fa2d4e" exitCode=0 Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.211749 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" event={"ID":"2aa3d7cc-57c4-420c-bb92-e7fc4525a763","Type":"ContainerDied","Data":"92b0c7bed4c3ff927ec53a56851fb7ddab37d7d69269ca428e40f7c5c8fa2d4e"} Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.461312 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.461765 4871 status_manager.go:851] "Failed to get status for pod" podUID="eaebff6e-df1b-477a-8eda-dd86e54561b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.462110 4871 status_manager.go:851] "Failed to get status for pod" podUID="2aa3d7cc-57c4-420c-bb92-e7fc4525a763" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-dzwqq\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.594098 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-service-ca\") pod \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.594543 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-template-provider-selection\") pod \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.594579 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-audit-dir\") pod \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\" (UID: 
\"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.594673 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-ocp-branding-template\") pod \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.594690 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2aa3d7cc-57c4-420c-bb92-e7fc4525a763" (UID: "2aa3d7cc-57c4-420c-bb92-e7fc4525a763"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.594734 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-trusted-ca-bundle\") pod \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.594774 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-cliconfig\") pod \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.594824 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jnjh\" (UniqueName: \"kubernetes.io/projected/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-kube-api-access-5jnjh\") pod \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\" (UID: 
\"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.594863 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-serving-cert\") pod \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.594914 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-idp-0-file-data\") pod \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.594950 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-audit-policies\") pod \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.594996 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-router-certs\") pod \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.595036 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-session\") pod \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.595089 4871 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-template-login\") pod \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.595131 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-template-error\") pod \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\" (UID: \"2aa3d7cc-57c4-420c-bb92-e7fc4525a763\") " Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.596186 4871 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.596636 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2aa3d7cc-57c4-420c-bb92-e7fc4525a763" (UID: "2aa3d7cc-57c4-420c-bb92-e7fc4525a763"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.598053 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2aa3d7cc-57c4-420c-bb92-e7fc4525a763" (UID: "2aa3d7cc-57c4-420c-bb92-e7fc4525a763"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.599027 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2aa3d7cc-57c4-420c-bb92-e7fc4525a763" (UID: "2aa3d7cc-57c4-420c-bb92-e7fc4525a763"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.599794 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2aa3d7cc-57c4-420c-bb92-e7fc4525a763" (UID: "2aa3d7cc-57c4-420c-bb92-e7fc4525a763"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.601386 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2aa3d7cc-57c4-420c-bb92-e7fc4525a763" (UID: "2aa3d7cc-57c4-420c-bb92-e7fc4525a763"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.603305 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-kube-api-access-5jnjh" (OuterVolumeSpecName: "kube-api-access-5jnjh") pod "2aa3d7cc-57c4-420c-bb92-e7fc4525a763" (UID: "2aa3d7cc-57c4-420c-bb92-e7fc4525a763"). InnerVolumeSpecName "kube-api-access-5jnjh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.603436 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2aa3d7cc-57c4-420c-bb92-e7fc4525a763" (UID: "2aa3d7cc-57c4-420c-bb92-e7fc4525a763"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.604072 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "2aa3d7cc-57c4-420c-bb92-e7fc4525a763" (UID: "2aa3d7cc-57c4-420c-bb92-e7fc4525a763"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.604436 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2aa3d7cc-57c4-420c-bb92-e7fc4525a763" (UID: "2aa3d7cc-57c4-420c-bb92-e7fc4525a763"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.605227 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2aa3d7cc-57c4-420c-bb92-e7fc4525a763" (UID: "2aa3d7cc-57c4-420c-bb92-e7fc4525a763"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.607972 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2aa3d7cc-57c4-420c-bb92-e7fc4525a763" (UID: "2aa3d7cc-57c4-420c-bb92-e7fc4525a763"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.608274 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2aa3d7cc-57c4-420c-bb92-e7fc4525a763" (UID: "2aa3d7cc-57c4-420c-bb92-e7fc4525a763"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.609066 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2aa3d7cc-57c4-420c-bb92-e7fc4525a763" (UID: "2aa3d7cc-57c4-420c-bb92-e7fc4525a763"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.696998 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.697033 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.697044 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.697053 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.697061 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.697071 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.697083 4871 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.697098 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.697108 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.697117 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jnjh\" (UniqueName: \"kubernetes.io/projected/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-kube-api-access-5jnjh\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.697125 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.697134 4871 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:34 crc kubenswrapper[4871]: I0128 15:21:34.697144 4871 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2aa3d7cc-57c4-420c-bb92-e7fc4525a763-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:34 crc kubenswrapper[4871]: E0128 
15:21:34.946299 4871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="6.4s" Jan 28 15:21:35 crc kubenswrapper[4871]: I0128 15:21:35.223852 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 15:21:35 crc kubenswrapper[4871]: I0128 15:21:35.223956 4871 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219" exitCode=1 Jan 28 15:21:35 crc kubenswrapper[4871]: I0128 15:21:35.224096 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219"} Jan 28 15:21:35 crc kubenswrapper[4871]: I0128 15:21:35.224877 4871 scope.go:117] "RemoveContainer" containerID="464b5a13b2b6bc78900af56dab5f53cd4d2eaab5290a6a3bc9fc0264b9bcd219" Jan 28 15:21:35 crc kubenswrapper[4871]: I0128 15:21:35.225382 4871 status_manager.go:851] "Failed to get status for pod" podUID="eaebff6e-df1b-477a-8eda-dd86e54561b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:35 crc kubenswrapper[4871]: I0128 15:21:35.226242 4871 status_manager.go:851] "Failed to get status for pod" podUID="2aa3d7cc-57c4-420c-bb92-e7fc4525a763" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-dzwqq\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:35 crc kubenswrapper[4871]: I0128 15:21:35.226769 4871 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:35 crc kubenswrapper[4871]: I0128 15:21:35.228918 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" event={"ID":"2aa3d7cc-57c4-420c-bb92-e7fc4525a763","Type":"ContainerDied","Data":"5f4ff2c4065c78d258947fcc60bf52c982e6edc0f82d96b59a4b73ed8af7fa9f"} Jan 28 15:21:35 crc kubenswrapper[4871]: I0128 15:21:35.229038 4871 scope.go:117] "RemoveContainer" containerID="92b0c7bed4c3ff927ec53a56851fb7ddab37d7d69269ca428e40f7c5c8fa2d4e" Jan 28 15:21:35 crc kubenswrapper[4871]: I0128 15:21:35.229062 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" Jan 28 15:21:35 crc kubenswrapper[4871]: I0128 15:21:35.230251 4871 status_manager.go:851] "Failed to get status for pod" podUID="eaebff6e-df1b-477a-8eda-dd86e54561b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:35 crc kubenswrapper[4871]: I0128 15:21:35.230918 4871 status_manager.go:851] "Failed to get status for pod" podUID="2aa3d7cc-57c4-420c-bb92-e7fc4525a763" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-dzwqq\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:35 crc kubenswrapper[4871]: I0128 15:21:35.231654 4871 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:35 crc kubenswrapper[4871]: I0128 15:21:35.235101 4871 status_manager.go:851] "Failed to get status for pod" podUID="eaebff6e-df1b-477a-8eda-dd86e54561b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:35 crc kubenswrapper[4871]: I0128 15:21:35.235746 4871 status_manager.go:851] "Failed to get status for pod" podUID="2aa3d7cc-57c4-420c-bb92-e7fc4525a763" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-dzwqq\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:35 crc kubenswrapper[4871]: I0128 15:21:35.236392 4871 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:36 crc kubenswrapper[4871]: I0128 15:21:36.244261 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 15:21:36 crc kubenswrapper[4871]: I0128 15:21:36.244679 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b57cc98061acfef51789e76a8bf0d68dc462b58a86f7d3e3a1b415b6feeaa8e9"} Jan 28 15:21:36 crc kubenswrapper[4871]: I0128 15:21:36.245836 4871 status_manager.go:851] "Failed to get status for pod" podUID="eaebff6e-df1b-477a-8eda-dd86e54561b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:36 crc kubenswrapper[4871]: I0128 15:21:36.246234 4871 status_manager.go:851] "Failed to get status for pod" podUID="2aa3d7cc-57c4-420c-bb92-e7fc4525a763" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-dzwqq\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 
15:21:36 crc kubenswrapper[4871]: I0128 15:21:36.246994 4871 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:36 crc kubenswrapper[4871]: I0128 15:21:36.903482 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:36 crc kubenswrapper[4871]: I0128 15:21:36.907957 4871 status_manager.go:851] "Failed to get status for pod" podUID="eaebff6e-df1b-477a-8eda-dd86e54561b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:36 crc kubenswrapper[4871]: I0128 15:21:36.909321 4871 status_manager.go:851] "Failed to get status for pod" podUID="2aa3d7cc-57c4-420c-bb92-e7fc4525a763" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-dzwqq\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:36 crc kubenswrapper[4871]: I0128 15:21:36.910281 4871 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:36 crc kubenswrapper[4871]: I0128 15:21:36.931098 4871 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="854bc471-aa40-4adf-9ca0-bc8a5a07d111" Jan 28 15:21:36 crc kubenswrapper[4871]: I0128 15:21:36.931144 4871 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="854bc471-aa40-4adf-9ca0-bc8a5a07d111" Jan 28 15:21:36 crc kubenswrapper[4871]: E0128 15:21:36.931713 4871 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:36 crc kubenswrapper[4871]: I0128 15:21:36.932262 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:36 crc kubenswrapper[4871]: W0128 15:21:36.960359 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-287ad151507f5cf70cb1e8975c3aa38de3bdb03218cfc258f1fd1422a35ad47f WatchSource:0}: Error finding container 287ad151507f5cf70cb1e8975c3aa38de3bdb03218cfc258f1fd1422a35ad47f: Status 404 returned error can't find the container with id 287ad151507f5cf70cb1e8975c3aa38de3bdb03218cfc258f1fd1422a35ad47f Jan 28 15:21:37 crc kubenswrapper[4871]: I0128 15:21:37.251970 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"287ad151507f5cf70cb1e8975c3aa38de3bdb03218cfc258f1fd1422a35ad47f"} Jan 28 15:21:37 crc kubenswrapper[4871]: I0128 15:21:37.887162 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:21:38 crc kubenswrapper[4871]: I0128 15:21:38.262251 4871 generic.go:334] "Generic (PLEG): container finished" 
podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8e2fff9e22b0d3c5dd3a4d3223770720d35e1cc35445402c852527366bcc2263" exitCode=0 Jan 28 15:21:38 crc kubenswrapper[4871]: I0128 15:21:38.262315 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8e2fff9e22b0d3c5dd3a4d3223770720d35e1cc35445402c852527366bcc2263"} Jan 28 15:21:38 crc kubenswrapper[4871]: I0128 15:21:38.262736 4871 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="854bc471-aa40-4adf-9ca0-bc8a5a07d111" Jan 28 15:21:38 crc kubenswrapper[4871]: I0128 15:21:38.262764 4871 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="854bc471-aa40-4adf-9ca0-bc8a5a07d111" Jan 28 15:21:38 crc kubenswrapper[4871]: E0128 15:21:38.263420 4871 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:38 crc kubenswrapper[4871]: I0128 15:21:38.263463 4871 status_manager.go:851] "Failed to get status for pod" podUID="eaebff6e-df1b-477a-8eda-dd86e54561b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:38 crc kubenswrapper[4871]: I0128 15:21:38.264386 4871 status_manager.go:851] "Failed to get status for pod" podUID="2aa3d7cc-57c4-420c-bb92-e7fc4525a763" pod="openshift-authentication/oauth-openshift-558db77b4-dzwqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-dzwqq\": dial tcp 38.102.83.199:6443: connect: connection 
refused" Jan 28 15:21:38 crc kubenswrapper[4871]: I0128 15:21:38.265013 4871 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Jan 28 15:21:39 crc kubenswrapper[4871]: I0128 15:21:39.266750 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:21:39 crc kubenswrapper[4871]: I0128 15:21:39.271512 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:21:39 crc kubenswrapper[4871]: I0128 15:21:39.276741 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"01b0c06b49435e6d82a7f3d8e878db3da138cd53a390d7f7c70cb6d8baae9e1c"} Jan 28 15:21:39 crc kubenswrapper[4871]: I0128 15:21:39.276792 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d9c48a27112b10b4ff06515db9251dc1af134221663cb4aca6913bf1dd1ae275"} Jan 28 15:21:39 crc kubenswrapper[4871]: I0128 15:21:39.276810 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"aad0f020186590523ddac242c4f93f79275f14d4fbbb5970462715961bbba468"} Jan 28 15:21:40 crc kubenswrapper[4871]: I0128 15:21:40.285315 4871 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="854bc471-aa40-4adf-9ca0-bc8a5a07d111" Jan 
28 15:21:40 crc kubenswrapper[4871]: I0128 15:21:40.285566 4871 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="854bc471-aa40-4adf-9ca0-bc8a5a07d111" Jan 28 15:21:40 crc kubenswrapper[4871]: I0128 15:21:40.285745 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"16ecddec06b1098751a667e36fa19b431c37082ad9f8a39691f6d66326015040"} Jan 28 15:21:40 crc kubenswrapper[4871]: I0128 15:21:40.285825 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:40 crc kubenswrapper[4871]: I0128 15:21:40.285839 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"56e428ada1a75b723a6795aa3b2849269b79f3e6bdc15dcd3c9c67a4dc28e318"} Jan 28 15:21:41 crc kubenswrapper[4871]: I0128 15:21:41.933169 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:41 crc kubenswrapper[4871]: I0128 15:21:41.933253 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:41 crc kubenswrapper[4871]: I0128 15:21:41.944852 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:45 crc kubenswrapper[4871]: I0128 15:21:45.299452 4871 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:46 crc kubenswrapper[4871]: I0128 15:21:46.316188 4871 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="854bc471-aa40-4adf-9ca0-bc8a5a07d111" Jan 28 15:21:46 crc 
kubenswrapper[4871]: I0128 15:21:46.316233 4871 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="854bc471-aa40-4adf-9ca0-bc8a5a07d111" Jan 28 15:21:46 crc kubenswrapper[4871]: I0128 15:21:46.320437 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:46 crc kubenswrapper[4871]: I0128 15:21:46.322471 4871 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f4e9dd34-72a4-4fdf-9b29-2d8718df368d" Jan 28 15:21:47 crc kubenswrapper[4871]: I0128 15:21:47.321070 4871 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="854bc471-aa40-4adf-9ca0-bc8a5a07d111" Jan 28 15:21:47 crc kubenswrapper[4871]: I0128 15:21:47.321444 4871 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="854bc471-aa40-4adf-9ca0-bc8a5a07d111" Jan 28 15:21:47 crc kubenswrapper[4871]: I0128 15:21:47.894645 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:21:48 crc kubenswrapper[4871]: I0128 15:21:48.918178 4871 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f4e9dd34-72a4-4fdf-9b29-2d8718df368d" Jan 28 15:21:55 crc kubenswrapper[4871]: I0128 15:21:55.662221 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 15:21:55 crc kubenswrapper[4871]: I0128 15:21:55.785704 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 15:21:56 crc 
kubenswrapper[4871]: I0128 15:21:56.157754 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 15:21:56 crc kubenswrapper[4871]: I0128 15:21:56.227214 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 15:21:56 crc kubenswrapper[4871]: I0128 15:21:56.303238 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 15:21:56 crc kubenswrapper[4871]: I0128 15:21:56.519337 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:21:56 crc kubenswrapper[4871]: I0128 15:21:56.683547 4871 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 15:21:56 crc kubenswrapper[4871]: I0128 15:21:56.887100 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 15:21:56 crc kubenswrapper[4871]: I0128 15:21:56.887364 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 15:21:57 crc kubenswrapper[4871]: I0128 15:21:57.031759 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 15:21:57 crc kubenswrapper[4871]: I0128 15:21:57.035805 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 15:21:57 crc kubenswrapper[4871]: I0128 15:21:57.066481 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 15:21:57 crc kubenswrapper[4871]: I0128 15:21:57.078367 4871 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 15:21:57 crc kubenswrapper[4871]: I0128 15:21:57.184837 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 15:21:57 crc kubenswrapper[4871]: I0128 15:21:57.364002 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 15:21:57 crc kubenswrapper[4871]: I0128 15:21:57.370188 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 15:21:57 crc kubenswrapper[4871]: I0128 15:21:57.478271 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 15:21:57 crc kubenswrapper[4871]: I0128 15:21:57.507945 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 28 15:21:57 crc kubenswrapper[4871]: I0128 15:21:57.525835 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 15:21:57 crc kubenswrapper[4871]: I0128 15:21:57.548737 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 15:21:57 crc kubenswrapper[4871]: I0128 15:21:57.567618 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 15:21:57 crc kubenswrapper[4871]: I0128 15:21:57.676836 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 28 15:21:57 crc kubenswrapper[4871]: I0128 15:21:57.742870 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 15:21:57 
crc kubenswrapper[4871]: I0128 15:21:57.789006 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 15:21:57 crc kubenswrapper[4871]: I0128 15:21:57.845429 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 15:21:58 crc kubenswrapper[4871]: I0128 15:21:58.006569 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 15:21:58 crc kubenswrapper[4871]: I0128 15:21:58.042029 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 28 15:21:58 crc kubenswrapper[4871]: I0128 15:21:58.157224 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 15:21:58 crc kubenswrapper[4871]: I0128 15:21:58.186853 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 28 15:21:58 crc kubenswrapper[4871]: I0128 15:21:58.275677 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 15:21:58 crc kubenswrapper[4871]: I0128 15:21:58.454558 4871 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 15:21:58 crc kubenswrapper[4871]: I0128 15:21:58.468679 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 15:21:58 crc kubenswrapper[4871]: I0128 15:21:58.584798 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 28 15:21:58 crc kubenswrapper[4871]: I0128 15:21:58.782865 4871 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 15:21:58 crc 
kubenswrapper[4871]: I0128 15:21:58.815710 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 28 15:21:58 crc kubenswrapper[4871]: I0128 15:21:58.818225 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 15:21:58 crc kubenswrapper[4871]: I0128 15:21:58.934528 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 28 15:21:58 crc kubenswrapper[4871]: I0128 15:21:58.937343 4871 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 15:21:58 crc kubenswrapper[4871]: I0128 15:21:58.943728 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.033210 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.265926 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.349954 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.381745 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.383002 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.487142 4871 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.532718 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.568262 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.598989 4871 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.604243 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-dzwqq"] Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.604304 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.609574 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.630768 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.630739857 podStartE2EDuration="14.630739857s" podCreationTimestamp="2026-01-28 15:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:59.623835063 +0000 UTC m=+271.519673385" watchObservedRunningTime="2026-01-28 15:21:59.630739857 +0000 UTC m=+271.526578209" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.679629 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.691663 4871 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.711837 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.750914 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.768381 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.829701 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.886332 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.889473 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.920333 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 15:21:59 crc kubenswrapper[4871]: I0128 15:21:59.972213 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 15:22:00 crc kubenswrapper[4871]: I0128 15:22:00.053465 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 15:22:00 crc kubenswrapper[4871]: I0128 15:22:00.064844 4871 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 15:22:00 crc kubenswrapper[4871]: I0128 15:22:00.320307 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 15:22:00 crc kubenswrapper[4871]: I0128 15:22:00.391846 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 15:22:00 crc kubenswrapper[4871]: I0128 15:22:00.442270 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 15:22:00 crc kubenswrapper[4871]: I0128 15:22:00.530191 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 15:22:00 crc kubenswrapper[4871]: I0128 15:22:00.598909 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 15:22:00 crc kubenswrapper[4871]: I0128 15:22:00.669558 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 15:22:00 crc kubenswrapper[4871]: I0128 15:22:00.671951 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 15:22:00 crc kubenswrapper[4871]: I0128 15:22:00.682071 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 15:22:00 crc kubenswrapper[4871]: I0128 15:22:00.682331 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 15:22:00 crc kubenswrapper[4871]: I0128 15:22:00.733634 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 15:22:00 crc kubenswrapper[4871]: I0128 15:22:00.856851 4871 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 15:22:00 crc kubenswrapper[4871]: I0128 15:22:00.878463 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 15:22:00 crc kubenswrapper[4871]: I0128 15:22:00.912406 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aa3d7cc-57c4-420c-bb92-e7fc4525a763" path="/var/lib/kubelet/pods/2aa3d7cc-57c4-420c-bb92-e7fc4525a763/volumes" Jan 28 15:22:00 crc kubenswrapper[4871]: I0128 15:22:00.982885 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 15:22:01 crc kubenswrapper[4871]: I0128 15:22:01.082044 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 15:22:01 crc kubenswrapper[4871]: I0128 15:22:01.145430 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 15:22:01 crc kubenswrapper[4871]: I0128 15:22:01.165938 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:22:01 crc kubenswrapper[4871]: I0128 15:22:01.200663 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 15:22:01 crc kubenswrapper[4871]: I0128 15:22:01.205837 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 15:22:01 crc kubenswrapper[4871]: I0128 15:22:01.434783 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 15:22:01 crc kubenswrapper[4871]: I0128 15:22:01.519570 4871 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 15:22:01 crc kubenswrapper[4871]: I0128 15:22:01.544284 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 15:22:01 crc kubenswrapper[4871]: I0128 15:22:01.637465 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 15:22:01 crc kubenswrapper[4871]: I0128 15:22:01.645033 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 28 15:22:01 crc kubenswrapper[4871]: I0128 15:22:01.666868 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 28 15:22:01 crc kubenswrapper[4871]: I0128 15:22:01.673683 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 15:22:01 crc kubenswrapper[4871]: I0128 15:22:01.786414 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 28 15:22:01 crc kubenswrapper[4871]: I0128 15:22:01.810887 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 15:22:01 crc kubenswrapper[4871]: I0128 15:22:01.834389 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 28 15:22:01 crc kubenswrapper[4871]: I0128 15:22:01.925067 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 15:22:01 crc kubenswrapper[4871]: I0128 15:22:01.941500 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 
15:22:01 crc kubenswrapper[4871]: I0128 15:22:01.953267 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 28 15:22:01 crc kubenswrapper[4871]: I0128 15:22:01.958525 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 15:22:02 crc kubenswrapper[4871]: I0128 15:22:02.067462 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 28 15:22:02 crc kubenswrapper[4871]: I0128 15:22:02.096713 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 15:22:02 crc kubenswrapper[4871]: I0128 15:22:02.192240 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 15:22:02 crc kubenswrapper[4871]: I0128 15:22:02.192420 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 15:22:02 crc kubenswrapper[4871]: I0128 15:22:02.257385 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 15:22:02 crc kubenswrapper[4871]: I0128 15:22:02.294999 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 15:22:02 crc kubenswrapper[4871]: I0128 15:22:02.409071 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 28 15:22:02 crc kubenswrapper[4871]: I0128 15:22:02.553306 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 15:22:02 crc kubenswrapper[4871]: 
I0128 15:22:02.574951 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 28 15:22:02 crc kubenswrapper[4871]: I0128 15:22:02.591127 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 15:22:02 crc kubenswrapper[4871]: I0128 15:22:02.610641 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 15:22:02 crc kubenswrapper[4871]: I0128 15:22:02.676716 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 15:22:02 crc kubenswrapper[4871]: I0128 15:22:02.770106 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 15:22:02 crc kubenswrapper[4871]: I0128 15:22:02.812862 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 15:22:02 crc kubenswrapper[4871]: I0128 15:22:02.954629 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.139296 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.162820 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.174464 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.219515 4871 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.240934 4871 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.242945 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.263217 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.265039 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.279672 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.306633 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.311531 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.386491 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.413240 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.489308 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.514885 4871 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.573567 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.636036 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.670011 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.671167 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.722652 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.944667 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 15:22:03 crc kubenswrapper[4871]: I0128 15:22:03.993822 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 15:22:04 crc kubenswrapper[4871]: I0128 15:22:04.011367 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 15:22:04 crc kubenswrapper[4871]: I0128 15:22:04.175706 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 15:22:04 crc kubenswrapper[4871]: I0128 15:22:04.195224 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 15:22:04 crc 
kubenswrapper[4871]: I0128 15:22:04.263683 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 15:22:04 crc kubenswrapper[4871]: I0128 15:22:04.327686 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 15:22:04 crc kubenswrapper[4871]: I0128 15:22:04.340568 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 15:22:04 crc kubenswrapper[4871]: I0128 15:22:04.412524 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 28 15:22:04 crc kubenswrapper[4871]: I0128 15:22:04.487808 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 15:22:04 crc kubenswrapper[4871]: I0128 15:22:04.531486 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 15:22:04 crc kubenswrapper[4871]: I0128 15:22:04.621569 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 15:22:04 crc kubenswrapper[4871]: I0128 15:22:04.625791 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 15:22:04 crc kubenswrapper[4871]: I0128 15:22:04.636358 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 15:22:04 crc kubenswrapper[4871]: I0128 15:22:04.804357 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 15:22:04 crc kubenswrapper[4871]: I0128 15:22:04.907551 4871 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.001625 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.010999 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.022903 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.045965 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.054329 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.076058 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.096144 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.123802 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.283433 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.295962 4871 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.336269 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.416348 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.449093 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.452494 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.462831 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.516961 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.521225 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.564793 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.626831 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 15:22:05 crc kubenswrapper[4871]: I0128 15:22:05.645657 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 15:22:05 crc 
kubenswrapper[4871]: I0128 15:22:05.972837 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 15:22:06 crc kubenswrapper[4871]: I0128 15:22:06.058331 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 15:22:06 crc kubenswrapper[4871]: I0128 15:22:06.095918 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 15:22:06 crc kubenswrapper[4871]: I0128 15:22:06.153875 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 15:22:06 crc kubenswrapper[4871]: I0128 15:22:06.202503 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 15:22:06 crc kubenswrapper[4871]: I0128 15:22:06.235146 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 15:22:06 crc kubenswrapper[4871]: I0128 15:22:06.242764 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 15:22:06 crc kubenswrapper[4871]: I0128 15:22:06.255765 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 15:22:06 crc kubenswrapper[4871]: I0128 15:22:06.390094 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 15:22:06 crc kubenswrapper[4871]: I0128 15:22:06.452812 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 28 15:22:06 crc kubenswrapper[4871]: I0128 15:22:06.470320 4871 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 15:22:06 crc kubenswrapper[4871]: I0128 15:22:06.476242 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 15:22:06 crc kubenswrapper[4871]: I0128 15:22:06.546968 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 15:22:06 crc kubenswrapper[4871]: I0128 15:22:06.550826 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 15:22:06 crc kubenswrapper[4871]: I0128 15:22:06.724610 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.007927 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.024710 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.036384 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.041680 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.063939 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.081222 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.087871 4871 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.121883 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.133756 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.219028 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.521704 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.684698 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.699835 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.746188 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.776729 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.825110 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.884678 4871 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.884951 
4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://584e7eaf7e2b5c5704cae4d887830069f8aaf1f3d04bf2de9def789ffb72731c" gracePeriod=5 Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.957980 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.984939 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5477954dc8-7xx9r"] Jan 28 15:22:07 crc kubenswrapper[4871]: E0128 15:22:07.985122 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa3d7cc-57c4-420c-bb92-e7fc4525a763" containerName="oauth-openshift" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.985134 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa3d7cc-57c4-420c-bb92-e7fc4525a763" containerName="oauth-openshift" Jan 28 15:22:07 crc kubenswrapper[4871]: E0128 15:22:07.985147 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaebff6e-df1b-477a-8eda-dd86e54561b0" containerName="installer" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.985154 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaebff6e-df1b-477a-8eda-dd86e54561b0" containerName="installer" Jan 28 15:22:07 crc kubenswrapper[4871]: E0128 15:22:07.985172 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.985179 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.985258 4871 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.985268 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa3d7cc-57c4-420c-bb92-e7fc4525a763" containerName="oauth-openshift" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.985278 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaebff6e-df1b-477a-8eda-dd86e54561b0" containerName="installer" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.986209 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.987791 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.989238 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.989249 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.989265 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.989314 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.989368 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.989655 4871 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.989977 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.990052 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.990079 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.990187 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 15:22:07 crc kubenswrapper[4871]: I0128 15:22:07.994088 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:07.997231 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:07.997886 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.005715 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5477954dc8-7xx9r"] Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.009916 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.131848 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-user-template-error\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.131902 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-router-certs\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.131927 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-user-template-login\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.131946 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.131976 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.131993 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.132013 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ec26607c-c692-468f-8ac0-7ae237ea08cf-audit-policies\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.132095 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.132147 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 
crc kubenswrapper[4871]: I0128 15:22:08.132191 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-service-ca\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.132218 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zf2v\" (UniqueName: \"kubernetes.io/projected/ec26607c-c692-468f-8ac0-7ae237ea08cf-kube-api-access-6zf2v\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.132244 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.132284 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-session\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.132341 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/ec26607c-c692-468f-8ac0-7ae237ea08cf-audit-dir\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.193005 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.234146 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.234535 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.234790 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-service-ca\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.234990 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zf2v\" (UniqueName: 
\"kubernetes.io/projected/ec26607c-c692-468f-8ac0-7ae237ea08cf-kube-api-access-6zf2v\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.235211 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.235508 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-session\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.235742 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-service-ca\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.235928 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ec26607c-c692-468f-8ac0-7ae237ea08cf-audit-dir\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 
15:22:08.236150 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-user-template-error\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.236251 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ec26607c-c692-468f-8ac0-7ae237ea08cf-audit-dir\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.236287 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.236214 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.236622 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-router-certs\") pod \"oauth-openshift-5477954dc8-7xx9r\" 
(UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.236719 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-user-template-login\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.236818 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.236940 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.237025 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.237121 4871 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ec26607c-c692-468f-8ac0-7ae237ea08cf-audit-policies\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.238672 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ec26607c-c692-468f-8ac0-7ae237ea08cf-audit-policies\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.242377 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-router-certs\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.244960 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-user-template-error\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.247524 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: 
\"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.248049 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-session\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.250242 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.252373 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zf2v\" (UniqueName: \"kubernetes.io/projected/ec26607c-c692-468f-8ac0-7ae237ea08cf-kube-api-access-6zf2v\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.252774 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-user-template-login\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.254311 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" 
(UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.254664 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ec26607c-c692-468f-8ac0-7ae237ea08cf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5477954dc8-7xx9r\" (UID: \"ec26607c-c692-468f-8ac0-7ae237ea08cf\") " pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.308811 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.461059 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.479378 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.507799 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.692940 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.693918 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5477954dc8-7xx9r"] Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.796070 4871 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-certs-default" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.800560 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.819324 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.835531 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.871451 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.931597 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 28 15:22:08 crc kubenswrapper[4871]: I0128 15:22:08.935283 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 15:22:09 crc kubenswrapper[4871]: I0128 15:22:09.049087 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:22:09 crc kubenswrapper[4871]: I0128 15:22:09.116143 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 28 15:22:09 crc kubenswrapper[4871]: I0128 15:22:09.231123 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 15:22:09 crc kubenswrapper[4871]: I0128 15:22:09.237031 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 15:22:09 crc kubenswrapper[4871]: I0128 15:22:09.453204 4871 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" event={"ID":"ec26607c-c692-468f-8ac0-7ae237ea08cf","Type":"ContainerStarted","Data":"b207f54191ca80b1823d381910943d20f922f49ab50b0e644af4e85a3e82a963"} Jan 28 15:22:09 crc kubenswrapper[4871]: I0128 15:22:09.453258 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" event={"ID":"ec26607c-c692-468f-8ac0-7ae237ea08cf","Type":"ContainerStarted","Data":"064bb03c25d65d41d6212756b3217920a105740856340d0ea7d67e59c736b2f4"} Jan 28 15:22:09 crc kubenswrapper[4871]: I0128 15:22:09.453887 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:09 crc kubenswrapper[4871]: I0128 15:22:09.483005 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" Jan 28 15:22:09 crc kubenswrapper[4871]: I0128 15:22:09.494880 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5477954dc8-7xx9r" podStartSLOduration=60.494847021 podStartE2EDuration="1m0.494847021s" podCreationTimestamp="2026-01-28 15:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:09.484738123 +0000 UTC m=+281.380576505" watchObservedRunningTime="2026-01-28 15:22:09.494847021 +0000 UTC m=+281.390685383" Jan 28 15:22:09 crc kubenswrapper[4871]: I0128 15:22:09.519359 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 28 15:22:09 crc kubenswrapper[4871]: I0128 15:22:09.700829 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 15:22:09 crc kubenswrapper[4871]: I0128 
15:22:09.703983 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 15:22:09 crc kubenswrapper[4871]: I0128 15:22:09.717512 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 15:22:09 crc kubenswrapper[4871]: I0128 15:22:09.737727 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:22:09 crc kubenswrapper[4871]: I0128 15:22:09.757899 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 15:22:09 crc kubenswrapper[4871]: I0128 15:22:09.827118 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 15:22:09 crc kubenswrapper[4871]: I0128 15:22:09.840768 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 28 15:22:09 crc kubenswrapper[4871]: I0128 15:22:09.843567 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 28 15:22:09 crc kubenswrapper[4871]: I0128 15:22:09.890713 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 15:22:10 crc kubenswrapper[4871]: I0128 15:22:10.112729 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 15:22:10 crc kubenswrapper[4871]: I0128 15:22:10.140163 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 15:22:10 crc kubenswrapper[4871]: I0128 15:22:10.280083 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 
28 15:22:10 crc kubenswrapper[4871]: I0128 15:22:10.322976 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 15:22:10 crc kubenswrapper[4871]: I0128 15:22:10.488275 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 15:22:10 crc kubenswrapper[4871]: I0128 15:22:10.503469 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 15:22:10 crc kubenswrapper[4871]: I0128 15:22:10.507877 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 15:22:11 crc kubenswrapper[4871]: I0128 15:22:11.118186 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 28 15:22:11 crc kubenswrapper[4871]: I0128 15:22:11.123900 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 15:22:11 crc kubenswrapper[4871]: I0128 15:22:11.413326 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 15:22:11 crc kubenswrapper[4871]: I0128 15:22:11.421298 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 15:22:13 crc kubenswrapper[4871]: I0128 15:22:13.478497 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 15:22:13 crc kubenswrapper[4871]: I0128 15:22:13.478581 4871 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="584e7eaf7e2b5c5704cae4d887830069f8aaf1f3d04bf2de9def789ffb72731c" exitCode=137 Jan 28 15:22:13 crc kubenswrapper[4871]: I0128 15:22:13.886906 4871 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 15:22:13 crc kubenswrapper[4871]: I0128 15:22:13.886985 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:14 crc kubenswrapper[4871]: I0128 15:22:14.014702 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:22:14 crc kubenswrapper[4871]: I0128 15:22:14.014781 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:22:14 crc kubenswrapper[4871]: I0128 15:22:14.014834 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:22:14 crc kubenswrapper[4871]: I0128 15:22:14.014853 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:22:14 crc kubenswrapper[4871]: I0128 15:22:14.014929 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:22:14 crc kubenswrapper[4871]: I0128 15:22:14.014874 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:22:14 crc kubenswrapper[4871]: I0128 15:22:14.015006 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:22:14 crc kubenswrapper[4871]: I0128 15:22:14.015254 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:22:14 crc kubenswrapper[4871]: I0128 15:22:14.015345 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:22:14 crc kubenswrapper[4871]: I0128 15:22:14.015795 4871 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:14 crc kubenswrapper[4871]: I0128 15:22:14.015829 4871 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:14 crc kubenswrapper[4871]: I0128 15:22:14.015848 4871 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:14 crc kubenswrapper[4871]: I0128 15:22:14.015867 4871 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:14 crc kubenswrapper[4871]: I0128 15:22:14.024960 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:22:14 crc kubenswrapper[4871]: I0128 15:22:14.117057 4871 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:14 crc kubenswrapper[4871]: I0128 15:22:14.487911 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 15:22:14 crc kubenswrapper[4871]: I0128 15:22:14.488018 4871 scope.go:117] "RemoveContainer" containerID="584e7eaf7e2b5c5704cae4d887830069f8aaf1f3d04bf2de9def789ffb72731c" Jan 28 15:22:14 crc kubenswrapper[4871]: I0128 15:22:14.488110 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:14 crc kubenswrapper[4871]: I0128 15:22:14.911775 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 28 15:22:19 crc kubenswrapper[4871]: I0128 15:22:19.272962 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 28 15:22:28 crc kubenswrapper[4871]: I0128 15:22:28.670086 4871 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 28 15:22:28 crc kubenswrapper[4871]: I0128 15:22:28.871825 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bh7lg"] Jan 28 15:22:28 crc kubenswrapper[4871]: I0128 15:22:28.872429 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" 
podUID="e5226d39-f16c-4e81-8ae2-8d5f54a8a683" containerName="controller-manager" containerID="cri-o://2bd4944a1a5d7c2403586d326a0845dd3f136102ea1a011df35342c707489c08" gracePeriod=30 Jan 28 15:22:28 crc kubenswrapper[4871]: I0128 15:22:28.971248 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l"] Jan 28 15:22:28 crc kubenswrapper[4871]: I0128 15:22:28.971518 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" podUID="1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6" containerName="route-controller-manager" containerID="cri-o://7f3444257c0454fce7920caaca7a72c463dadca25722f288d55eea1c602e62f3" gracePeriod=30 Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.232056 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.281117 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.321659 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-config\") pod \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\" (UID: \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.321735 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-serving-cert\") pod \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\" (UID: \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.321766 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs2wf\" (UniqueName: \"kubernetes.io/projected/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-kube-api-access-fs2wf\") pod \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\" (UID: \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.321936 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-client-ca\") pod \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\" (UID: \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.321975 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-proxy-ca-bundles\") pod \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\" (UID: \"e5226d39-f16c-4e81-8ae2-8d5f54a8a683\") " Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.322818 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e5226d39-f16c-4e81-8ae2-8d5f54a8a683" (UID: "e5226d39-f16c-4e81-8ae2-8d5f54a8a683"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.322829 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-client-ca" (OuterVolumeSpecName: "client-ca") pod "e5226d39-f16c-4e81-8ae2-8d5f54a8a683" (UID: "e5226d39-f16c-4e81-8ae2-8d5f54a8a683"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.322882 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-config" (OuterVolumeSpecName: "config") pod "e5226d39-f16c-4e81-8ae2-8d5f54a8a683" (UID: "e5226d39-f16c-4e81-8ae2-8d5f54a8a683"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.328059 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-kube-api-access-fs2wf" (OuterVolumeSpecName: "kube-api-access-fs2wf") pod "e5226d39-f16c-4e81-8ae2-8d5f54a8a683" (UID: "e5226d39-f16c-4e81-8ae2-8d5f54a8a683"). InnerVolumeSpecName "kube-api-access-fs2wf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.328443 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e5226d39-f16c-4e81-8ae2-8d5f54a8a683" (UID: "e5226d39-f16c-4e81-8ae2-8d5f54a8a683"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.423288 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-serving-cert\") pod \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\" (UID: \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\") " Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.423343 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6595\" (UniqueName: \"kubernetes.io/projected/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-kube-api-access-w6595\") pod \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\" (UID: \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\") " Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.423371 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-client-ca\") pod \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\" (UID: \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\") " Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.423391 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-config\") pod \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\" (UID: \"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6\") " Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.423573 4871 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.423600 4871 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:29 crc 
kubenswrapper[4871]: I0128 15:22:29.423609 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.423617 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.423625 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs2wf\" (UniqueName: \"kubernetes.io/projected/e5226d39-f16c-4e81-8ae2-8d5f54a8a683-kube-api-access-fs2wf\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.424212 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-client-ca" (OuterVolumeSpecName: "client-ca") pod "1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6" (UID: "1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.424228 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-config" (OuterVolumeSpecName: "config") pod "1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6" (UID: "1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.426660 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-kube-api-access-w6595" (OuterVolumeSpecName: "kube-api-access-w6595") pod "1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6" (UID: "1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6"). 
InnerVolumeSpecName "kube-api-access-w6595". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.427194 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6" (UID: "1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.524537 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.524639 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6595\" (UniqueName: \"kubernetes.io/projected/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-kube-api-access-w6595\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.524667 4871 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.524687 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.593814 4871 generic.go:334] "Generic (PLEG): container finished" podID="1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6" containerID="7f3444257c0454fce7920caaca7a72c463dadca25722f288d55eea1c602e62f3" exitCode=0 Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.593920 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.593927 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" event={"ID":"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6","Type":"ContainerDied","Data":"7f3444257c0454fce7920caaca7a72c463dadca25722f288d55eea1c602e62f3"} Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.594122 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l" event={"ID":"1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6","Type":"ContainerDied","Data":"4aa0d9a0b83bae98e812ce1fc836f456b8366b1c9346e8ec544565bdbee537e0"} Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.594174 4871 scope.go:117] "RemoveContainer" containerID="7f3444257c0454fce7920caaca7a72c463dadca25722f288d55eea1c602e62f3" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.595527 4871 generic.go:334] "Generic (PLEG): container finished" podID="e5226d39-f16c-4e81-8ae2-8d5f54a8a683" containerID="2bd4944a1a5d7c2403586d326a0845dd3f136102ea1a011df35342c707489c08" exitCode=0 Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.595559 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" event={"ID":"e5226d39-f16c-4e81-8ae2-8d5f54a8a683","Type":"ContainerDied","Data":"2bd4944a1a5d7c2403586d326a0845dd3f136102ea1a011df35342c707489c08"} Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.595615 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" event={"ID":"e5226d39-f16c-4e81-8ae2-8d5f54a8a683","Type":"ContainerDied","Data":"5bbb7a4551178db27120a612bb76c77062154588e8e9d04831016b89d1d1f074"} Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.595670 4871 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bh7lg" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.609692 4871 scope.go:117] "RemoveContainer" containerID="7f3444257c0454fce7920caaca7a72c463dadca25722f288d55eea1c602e62f3" Jan 28 15:22:29 crc kubenswrapper[4871]: E0128 15:22:29.610155 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f3444257c0454fce7920caaca7a72c463dadca25722f288d55eea1c602e62f3\": container with ID starting with 7f3444257c0454fce7920caaca7a72c463dadca25722f288d55eea1c602e62f3 not found: ID does not exist" containerID="7f3444257c0454fce7920caaca7a72c463dadca25722f288d55eea1c602e62f3" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.610195 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f3444257c0454fce7920caaca7a72c463dadca25722f288d55eea1c602e62f3"} err="failed to get container status \"7f3444257c0454fce7920caaca7a72c463dadca25722f288d55eea1c602e62f3\": rpc error: code = NotFound desc = could not find container \"7f3444257c0454fce7920caaca7a72c463dadca25722f288d55eea1c602e62f3\": container with ID starting with 7f3444257c0454fce7920caaca7a72c463dadca25722f288d55eea1c602e62f3 not found: ID does not exist" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.610227 4871 scope.go:117] "RemoveContainer" containerID="2bd4944a1a5d7c2403586d326a0845dd3f136102ea1a011df35342c707489c08" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.628944 4871 scope.go:117] "RemoveContainer" containerID="2bd4944a1a5d7c2403586d326a0845dd3f136102ea1a011df35342c707489c08" Jan 28 15:22:29 crc kubenswrapper[4871]: E0128 15:22:29.629511 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bd4944a1a5d7c2403586d326a0845dd3f136102ea1a011df35342c707489c08\": container 
with ID starting with 2bd4944a1a5d7c2403586d326a0845dd3f136102ea1a011df35342c707489c08 not found: ID does not exist" containerID="2bd4944a1a5d7c2403586d326a0845dd3f136102ea1a011df35342c707489c08" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.629535 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bd4944a1a5d7c2403586d326a0845dd3f136102ea1a011df35342c707489c08"} err="failed to get container status \"2bd4944a1a5d7c2403586d326a0845dd3f136102ea1a011df35342c707489c08\": rpc error: code = NotFound desc = could not find container \"2bd4944a1a5d7c2403586d326a0845dd3f136102ea1a011df35342c707489c08\": container with ID starting with 2bd4944a1a5d7c2403586d326a0845dd3f136102ea1a011df35342c707489c08 not found: ID does not exist" Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.640391 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bh7lg"] Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.648405 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bh7lg"] Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.653408 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l"] Jan 28 15:22:29 crc kubenswrapper[4871]: I0128 15:22:29.674628 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q676l"] Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.434347 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86f8dcd44f-7chlb"] Jan 28 15:22:30 crc kubenswrapper[4871]: E0128 15:22:30.434895 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6" containerName="route-controller-manager" Jan 28 15:22:30 crc 
kubenswrapper[4871]: I0128 15:22:30.435004 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6" containerName="route-controller-manager" Jan 28 15:22:30 crc kubenswrapper[4871]: E0128 15:22:30.435126 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5226d39-f16c-4e81-8ae2-8d5f54a8a683" containerName="controller-manager" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.435214 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5226d39-f16c-4e81-8ae2-8d5f54a8a683" containerName="controller-manager" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.435404 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5226d39-f16c-4e81-8ae2-8d5f54a8a683" containerName="controller-manager" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.435493 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6" containerName="route-controller-manager" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.436082 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86f8dcd44f-7chlb" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.439145 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd"] Jan 28 15:22:30 crc kubenswrapper[4871]: W0128 15:22:30.439505 4871 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:22:30 crc kubenswrapper[4871]: E0128 15:22:30.439555 4871 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:22:30 crc kubenswrapper[4871]: W0128 15:22:30.439560 4871 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:22:30 crc kubenswrapper[4871]: W0128 15:22:30.439505 4871 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:22:30 crc kubenswrapper[4871]: E0128 
15:22:30.439723 4871 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:22:30 crc kubenswrapper[4871]: W0128 15:22:30.439514 4871 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:22:30 crc kubenswrapper[4871]: E0128 15:22:30.439756 4871 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:22:30 crc kubenswrapper[4871]: W0128 15:22:30.439561 4871 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:22:30 crc kubenswrapper[4871]: E0128 15:22:30.439783 4871 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User 
\"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:22:30 crc kubenswrapper[4871]: E0128 15:22:30.439680 4871 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:22:30 crc kubenswrapper[4871]: W0128 15:22:30.439917 4871 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:22:30 crc kubenswrapper[4871]: W0128 15:22:30.439923 4871 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:22:30 crc kubenswrapper[4871]: E0128 15:22:30.439936 4871 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no 
relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:22:30 crc kubenswrapper[4871]: E0128 15:22:30.439944 4871 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.440017 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd" Jan 28 15:22:30 crc kubenswrapper[4871]: W0128 15:22:30.441665 4871 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:22:30 crc kubenswrapper[4871]: E0128 15:22:30.441826 4871 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:22:30 crc kubenswrapper[4871]: W0128 15:22:30.441719 4871 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: secrets "route-controller-manager-sa-dockercfg-h2zr2" is forbidden: User "system:node:crc" cannot list resource "secrets" in 
API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:22:30 crc kubenswrapper[4871]: W0128 15:22:30.442019 4871 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:22:30 crc kubenswrapper[4871]: E0128 15:22:30.442049 4871 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:22:30 crc kubenswrapper[4871]: W0128 15:22:30.441885 4871 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:22:30 crc kubenswrapper[4871]: E0128 15:22:30.442140 4871 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:22:30 crc kubenswrapper[4871]: W0128 15:22:30.442185 4871 reflector.go:561] 
object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:22:30 crc kubenswrapper[4871]: W0128 15:22:30.442197 4871 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:22:30 crc kubenswrapper[4871]: E0128 15:22:30.442205 4871 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:22:30 crc kubenswrapper[4871]: E0128 15:22:30.442211 4871 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:22:30 crc kubenswrapper[4871]: E0128 15:22:30.442442 4871 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: 
failed to list *v1.Secret: secrets \"route-controller-manager-sa-dockercfg-h2zr2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.470807 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd"] Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.504406 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86f8dcd44f-7chlb"] Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.535756 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4082e41-54bb-49de-b052-21529f67e0d6-config\") pod \"controller-manager-86f8dcd44f-7chlb\" (UID: \"b4082e41-54bb-49de-b052-21529f67e0d6\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-7chlb" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.536132 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f47cc48-3512-46ab-a69f-6193c1d89efe-config\") pod \"route-controller-manager-7db789c79-hzbqd\" (UID: \"6f47cc48-3512-46ab-a69f-6193c1d89efe\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.536314 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw7np\" (UniqueName: \"kubernetes.io/projected/6f47cc48-3512-46ab-a69f-6193c1d89efe-kube-api-access-tw7np\") pod \"route-controller-manager-7db789c79-hzbqd\" (UID: \"6f47cc48-3512-46ab-a69f-6193c1d89efe\") " 
pod="openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.536397 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4082e41-54bb-49de-b052-21529f67e0d6-proxy-ca-bundles\") pod \"controller-manager-86f8dcd44f-7chlb\" (UID: \"b4082e41-54bb-49de-b052-21529f67e0d6\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-7chlb" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.536482 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f47cc48-3512-46ab-a69f-6193c1d89efe-client-ca\") pod \"route-controller-manager-7db789c79-hzbqd\" (UID: \"6f47cc48-3512-46ab-a69f-6193c1d89efe\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.536559 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4082e41-54bb-49de-b052-21529f67e0d6-client-ca\") pod \"controller-manager-86f8dcd44f-7chlb\" (UID: \"b4082e41-54bb-49de-b052-21529f67e0d6\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-7chlb" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.536705 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5grf\" (UniqueName: \"kubernetes.io/projected/b4082e41-54bb-49de-b052-21529f67e0d6-kube-api-access-x5grf\") pod \"controller-manager-86f8dcd44f-7chlb\" (UID: \"b4082e41-54bb-49de-b052-21529f67e0d6\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-7chlb" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.536826 4871 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f47cc48-3512-46ab-a69f-6193c1d89efe-serving-cert\") pod \"route-controller-manager-7db789c79-hzbqd\" (UID: \"6f47cc48-3512-46ab-a69f-6193c1d89efe\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.536919 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4082e41-54bb-49de-b052-21529f67e0d6-serving-cert\") pod \"controller-manager-86f8dcd44f-7chlb\" (UID: \"b4082e41-54bb-49de-b052-21529f67e0d6\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-7chlb" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.557609 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd"] Jan 28 15:22:30 crc kubenswrapper[4871]: E0128 15:22:30.557905 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-tw7np serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd" podUID="6f47cc48-3512-46ab-a69f-6193c1d89efe" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.565261 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86f8dcd44f-7chlb"] Jan 28 15:22:30 crc kubenswrapper[4871]: E0128 15:22:30.565804 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-x5grf proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-86f8dcd44f-7chlb" podUID="b4082e41-54bb-49de-b052-21529f67e0d6" Jan 28 15:22:30 crc 
kubenswrapper[4871]: I0128 15:22:30.602338 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.602347 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86f8dcd44f-7chlb" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.608784 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.613946 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86f8dcd44f-7chlb" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.637944 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f47cc48-3512-46ab-a69f-6193c1d89efe-serving-cert\") pod \"route-controller-manager-7db789c79-hzbqd\" (UID: \"6f47cc48-3512-46ab-a69f-6193c1d89efe\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.637995 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4082e41-54bb-49de-b052-21529f67e0d6-serving-cert\") pod \"controller-manager-86f8dcd44f-7chlb\" (UID: \"b4082e41-54bb-49de-b052-21529f67e0d6\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-7chlb" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.638052 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4082e41-54bb-49de-b052-21529f67e0d6-config\") pod \"controller-manager-86f8dcd44f-7chlb\" (UID: 
\"b4082e41-54bb-49de-b052-21529f67e0d6\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-7chlb" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.638083 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f47cc48-3512-46ab-a69f-6193c1d89efe-config\") pod \"route-controller-manager-7db789c79-hzbqd\" (UID: \"6f47cc48-3512-46ab-a69f-6193c1d89efe\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.638113 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw7np\" (UniqueName: \"kubernetes.io/projected/6f47cc48-3512-46ab-a69f-6193c1d89efe-kube-api-access-tw7np\") pod \"route-controller-manager-7db789c79-hzbqd\" (UID: \"6f47cc48-3512-46ab-a69f-6193c1d89efe\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.638243 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4082e41-54bb-49de-b052-21529f67e0d6-proxy-ca-bundles\") pod \"controller-manager-86f8dcd44f-7chlb\" (UID: \"b4082e41-54bb-49de-b052-21529f67e0d6\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-7chlb" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.638274 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f47cc48-3512-46ab-a69f-6193c1d89efe-client-ca\") pod \"route-controller-manager-7db789c79-hzbqd\" (UID: \"6f47cc48-3512-46ab-a69f-6193c1d89efe\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.638295 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4082e41-54bb-49de-b052-21529f67e0d6-client-ca\") pod \"controller-manager-86f8dcd44f-7chlb\" (UID: \"b4082e41-54bb-49de-b052-21529f67e0d6\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-7chlb" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.638320 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5grf\" (UniqueName: \"kubernetes.io/projected/b4082e41-54bb-49de-b052-21529f67e0d6-kube-api-access-x5grf\") pod \"controller-manager-86f8dcd44f-7chlb\" (UID: \"b4082e41-54bb-49de-b052-21529f67e0d6\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-7chlb" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.913036 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6" path="/var/lib/kubelet/pods/1075f46c-ae1a-4c66-b7ad-5f5f5942fdd6/volumes" Jan 28 15:22:30 crc kubenswrapper[4871]: I0128 15:22:30.914488 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5226d39-f16c-4e81-8ae2-8d5f54a8a683" path="/var/lib/kubelet/pods/e5226d39-f16c-4e81-8ae2-8d5f54a8a683/volumes" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.260280 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.353548 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.411573 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.419915 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/6f47cc48-3512-46ab-a69f-6193c1d89efe-client-ca\") pod \"route-controller-manager-7db789c79-hzbqd\" (UID: \"6f47cc48-3512-46ab-a69f-6193c1d89efe\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.454745 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.460751 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4082e41-54bb-49de-b052-21529f67e0d6-config\") pod \"controller-manager-86f8dcd44f-7chlb\" (UID: \"b4082e41-54bb-49de-b052-21529f67e0d6\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-7chlb" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.471373 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.518671 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.536833 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f47cc48-3512-46ab-a69f-6193c1d89efe-serving-cert\") pod \"route-controller-manager-7db789c79-hzbqd\" (UID: \"6f47cc48-3512-46ab-a69f-6193c1d89efe\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.558674 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.560289 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/6f47cc48-3512-46ab-a69f-6193c1d89efe-client-ca\") pod \"6f47cc48-3512-46ab-a69f-6193c1d89efe\" (UID: \"6f47cc48-3512-46ab-a69f-6193c1d89efe\") " Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.560414 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4082e41-54bb-49de-b052-21529f67e0d6-config\") pod \"b4082e41-54bb-49de-b052-21529f67e0d6\" (UID: \"b4082e41-54bb-49de-b052-21529f67e0d6\") " Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.560726 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f47cc48-3512-46ab-a69f-6193c1d89efe-client-ca" (OuterVolumeSpecName: "client-ca") pod "6f47cc48-3512-46ab-a69f-6193c1d89efe" (UID: "6f47cc48-3512-46ab-a69f-6193c1d89efe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.560956 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4082e41-54bb-49de-b052-21529f67e0d6-config" (OuterVolumeSpecName: "config") pod "b4082e41-54bb-49de-b052-21529f67e0d6" (UID: "b4082e41-54bb-49de-b052-21529f67e0d6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.561349 4871 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f47cc48-3512-46ab-a69f-6193c1d89efe-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.561396 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4082e41-54bb-49de-b052-21529f67e0d6-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.607863 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86f8dcd44f-7chlb" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.607969 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.608355 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:22:31 crc kubenswrapper[4871]: E0128 15:22:31.638696 4871 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 28 15:22:31 crc kubenswrapper[4871]: E0128 15:22:31.638781 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b4082e41-54bb-49de-b052-21529f67e0d6-client-ca podName:b4082e41-54bb-49de-b052-21529f67e0d6 nodeName:}" failed. No retries permitted until 2026-01-28 15:22:32.138763449 +0000 UTC m=+304.034601781 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/b4082e41-54bb-49de-b052-21529f67e0d6-client-ca") pod "controller-manager-86f8dcd44f-7chlb" (UID: "b4082e41-54bb-49de-b052-21529f67e0d6") : failed to sync configmap cache: timed out waiting for the condition Jan 28 15:22:31 crc kubenswrapper[4871]: E0128 15:22:31.639001 4871 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 28 15:22:31 crc kubenswrapper[4871]: E0128 15:22:31.639032 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6f47cc48-3512-46ab-a69f-6193c1d89efe-config podName:6f47cc48-3512-46ab-a69f-6193c1d89efe nodeName:}" failed. No retries permitted until 2026-01-28 15:22:32.139022686 +0000 UTC m=+304.034861018 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6f47cc48-3512-46ab-a69f-6193c1d89efe-config") pod "route-controller-manager-7db789c79-hzbqd" (UID: "6f47cc48-3512-46ab-a69f-6193c1d89efe") : failed to sync configmap cache: timed out waiting for the condition Jan 28 15:22:31 crc kubenswrapper[4871]: E0128 15:22:31.639073 4871 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 28 15:22:31 crc kubenswrapper[4871]: E0128 15:22:31.639100 4871 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Jan 28 15:22:31 crc kubenswrapper[4871]: E0128 15:22:31.639168 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4082e41-54bb-49de-b052-21529f67e0d6-serving-cert podName:b4082e41-54bb-49de-b052-21529f67e0d6 nodeName:}" failed. No retries permitted until 2026-01-28 15:22:32.13914417 +0000 UTC m=+304.034982552 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b4082e41-54bb-49de-b052-21529f67e0d6-serving-cert") pod "controller-manager-86f8dcd44f-7chlb" (UID: "b4082e41-54bb-49de-b052-21529f67e0d6") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:22:31 crc kubenswrapper[4871]: E0128 15:22:31.639194 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b4082e41-54bb-49de-b052-21529f67e0d6-proxy-ca-bundles podName:b4082e41-54bb-49de-b052-21529f67e0d6 nodeName:}" failed. No retries permitted until 2026-01-28 15:22:32.139182451 +0000 UTC m=+304.035020903 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/b4082e41-54bb-49de-b052-21529f67e0d6-proxy-ca-bundles") pod "controller-manager-86f8dcd44f-7chlb" (UID: "b4082e41-54bb-49de-b052-21529f67e0d6") : failed to sync configmap cache: timed out waiting for the condition Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.644647 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86f8dcd44f-7chlb"] Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.646929 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 15:22:31 crc kubenswrapper[4871]: E0128 15:22:31.654672 4871 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 28 15:22:31 crc kubenswrapper[4871]: E0128 15:22:31.654722 4871 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Jan 28 15:22:31 crc kubenswrapper[4871]: E0128 15:22:31.654744 4871 projected.go:194] Error preparing data for projected volume kube-api-access-x5grf for pod openshift-controller-manager/controller-manager-86f8dcd44f-7chlb: [failed to sync configmap cache: timed out waiting for the condition, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Jan 28 15:22:31 crc kubenswrapper[4871]: E0128 15:22:31.654821 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b4082e41-54bb-49de-b052-21529f67e0d6-kube-api-access-x5grf podName:b4082e41-54bb-49de-b052-21529f67e0d6 nodeName:}" failed. No retries permitted until 2026-01-28 15:22:32.154794879 +0000 UTC m=+304.050633221 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x5grf" (UniqueName: "kubernetes.io/projected/b4082e41-54bb-49de-b052-21529f67e0d6-kube-api-access-x5grf") pod "controller-manager-86f8dcd44f-7chlb" (UID: "b4082e41-54bb-49de-b052-21529f67e0d6") : [failed to sync configmap cache: timed out waiting for the condition, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.659001 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw7np\" (UniqueName: \"kubernetes.io/projected/6f47cc48-3512-46ab-a69f-6193c1d89efe-kube-api-access-tw7np\") pod \"route-controller-manager-7db789c79-hzbqd\" (UID: \"6f47cc48-3512-46ab-a69f-6193c1d89efe\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.662382 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f47cc48-3512-46ab-a69f-6193c1d89efe-serving-cert\") pod \"6f47cc48-3512-46ab-a69f-6193c1d89efe\" (UID: \"6f47cc48-3512-46ab-a69f-6193c1d89efe\") " Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.666714 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76595bd9c4-rp4c9"] Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.667373 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f47cc48-3512-46ab-a69f-6193c1d89efe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6f47cc48-3512-46ab-a69f-6193c1d89efe" (UID: "6f47cc48-3512-46ab-a69f-6193c1d89efe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.667776 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.669213 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86f8dcd44f-7chlb"] Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.670286 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.670523 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.670642 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.671555 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.673047 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.674338 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.675409 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76595bd9c4-rp4c9"] Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.680129 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.763287 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw7np\" (UniqueName: 
\"kubernetes.io/projected/6f47cc48-3512-46ab-a69f-6193c1d89efe-kube-api-access-tw7np\") pod \"6f47cc48-3512-46ab-a69f-6193c1d89efe\" (UID: \"6f47cc48-3512-46ab-a69f-6193c1d89efe\") " Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.763484 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-serving-cert\") pod \"controller-manager-76595bd9c4-rp4c9\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.763572 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-proxy-ca-bundles\") pod \"controller-manager-76595bd9c4-rp4c9\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.763650 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-config\") pod \"controller-manager-76595bd9c4-rp4c9\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.763674 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhxrt\" (UniqueName: \"kubernetes.io/projected/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-kube-api-access-lhxrt\") pod \"controller-manager-76595bd9c4-rp4c9\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 
15:22:31.763735 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-client-ca\") pod \"controller-manager-76595bd9c4-rp4c9\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.763813 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f47cc48-3512-46ab-a69f-6193c1d89efe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.763825 4871 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4082e41-54bb-49de-b052-21529f67e0d6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.763858 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5grf\" (UniqueName: \"kubernetes.io/projected/b4082e41-54bb-49de-b052-21529f67e0d6-kube-api-access-x5grf\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.763866 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4082e41-54bb-49de-b052-21529f67e0d6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.763877 4871 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4082e41-54bb-49de-b052-21529f67e0d6-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.766123 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f47cc48-3512-46ab-a69f-6193c1d89efe-kube-api-access-tw7np" (OuterVolumeSpecName: "kube-api-access-tw7np") pod 
"6f47cc48-3512-46ab-a69f-6193c1d89efe" (UID: "6f47cc48-3512-46ab-a69f-6193c1d89efe"). InnerVolumeSpecName "kube-api-access-tw7np". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.864759 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-client-ca\") pod \"controller-manager-76595bd9c4-rp4c9\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.864833 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-serving-cert\") pod \"controller-manager-76595bd9c4-rp4c9\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.864878 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-proxy-ca-bundles\") pod \"controller-manager-76595bd9c4-rp4c9\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.864919 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-config\") pod \"controller-manager-76595bd9c4-rp4c9\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.864958 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lhxrt\" (UniqueName: \"kubernetes.io/projected/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-kube-api-access-lhxrt\") pod \"controller-manager-76595bd9c4-rp4c9\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.865015 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw7np\" (UniqueName: \"kubernetes.io/projected/6f47cc48-3512-46ab-a69f-6193c1d89efe-kube-api-access-tw7np\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.866000 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-proxy-ca-bundles\") pod \"controller-manager-76595bd9c4-rp4c9\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.866060 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-client-ca\") pod \"controller-manager-76595bd9c4-rp4c9\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.866375 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-config\") pod \"controller-manager-76595bd9c4-rp4c9\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.868566 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-serving-cert\") pod \"controller-manager-76595bd9c4-rp4c9\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.894945 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhxrt\" (UniqueName: \"kubernetes.io/projected/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-kube-api-access-lhxrt\") pod \"controller-manager-76595bd9c4-rp4c9\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.931268 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.953260 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd"] Jan 28 15:22:31 crc kubenswrapper[4871]: I0128 15:22:31.957790 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db789c79-hzbqd"] Jan 28 15:22:32 crc kubenswrapper[4871]: I0128 15:22:32.013153 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:32 crc kubenswrapper[4871]: I0128 15:22:32.067977 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f47cc48-3512-46ab-a69f-6193c1d89efe-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:32 crc kubenswrapper[4871]: I0128 15:22:32.229887 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76595bd9c4-rp4c9"] Jan 28 15:22:32 crc kubenswrapper[4871]: E0128 15:22:32.369952 4871 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5226d39_f16c_4e81_8ae2_8d5f54a8a683.slice/crio-2bd4944a1a5d7c2403586d326a0845dd3f136102ea1a011df35342c707489c08.scope\": RecentStats: unable to find data in memory cache]" Jan 28 15:22:32 crc kubenswrapper[4871]: I0128 15:22:32.614327 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" event={"ID":"dd90d88a-927e-4a05-a572-1ddcfc3fe44b","Type":"ContainerStarted","Data":"256c28b5b11401607e41394fd0a88aae696d260619a6207e7ed69967d0b542ab"} Jan 28 15:22:32 crc kubenswrapper[4871]: I0128 15:22:32.614371 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" event={"ID":"dd90d88a-927e-4a05-a572-1ddcfc3fe44b","Type":"ContainerStarted","Data":"46351a19349b672a898b85e3c1e15e58548f2b56841b23c3b557cdc9fab4856d"} Jan 28 15:22:32 crc kubenswrapper[4871]: I0128 15:22:32.614950 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:32 crc kubenswrapper[4871]: I0128 15:22:32.620344 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:32 crc kubenswrapper[4871]: I0128 15:22:32.631186 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" podStartSLOduration=2.631171011 podStartE2EDuration="2.631171011s" podCreationTimestamp="2026-01-28 15:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:32.630122847 +0000 UTC m=+304.525961169" watchObservedRunningTime="2026-01-28 15:22:32.631171011 +0000 UTC m=+304.527009333" Jan 28 15:22:32 crc kubenswrapper[4871]: I0128 15:22:32.910044 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f47cc48-3512-46ab-a69f-6193c1d89efe" path="/var/lib/kubelet/pods/6f47cc48-3512-46ab-a69f-6193c1d89efe/volumes" Jan 28 15:22:32 crc kubenswrapper[4871]: I0128 15:22:32.910386 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4082e41-54bb-49de-b052-21529f67e0d6" path="/var/lib/kubelet/pods/b4082e41-54bb-49de-b052-21529f67e0d6/volumes" Jan 28 15:22:33 crc kubenswrapper[4871]: I0128 15:22:33.777892 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq"] Jan 28 15:22:33 crc kubenswrapper[4871]: I0128 15:22:33.782607 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" Jan 28 15:22:33 crc kubenswrapper[4871]: I0128 15:22:33.787851 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq"] Jan 28 15:22:33 crc kubenswrapper[4871]: I0128 15:22:33.788508 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 15:22:33 crc kubenswrapper[4871]: I0128 15:22:33.788563 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 15:22:33 crc kubenswrapper[4871]: I0128 15:22:33.788854 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 15:22:33 crc kubenswrapper[4871]: I0128 15:22:33.789153 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:22:33 crc kubenswrapper[4871]: I0128 15:22:33.789232 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 15:22:33 crc kubenswrapper[4871]: I0128 15:22:33.795265 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 15:22:33 crc kubenswrapper[4871]: I0128 15:22:33.892370 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7bf87bb-311e-448c-bf55-6fee43f5d997-client-ca\") pod \"route-controller-manager-6fdc4c5b5b-2rmmq\" (UID: \"b7bf87bb-311e-448c-bf55-6fee43f5d997\") " pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" Jan 28 15:22:33 crc kubenswrapper[4871]: I0128 15:22:33.892438 4871 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trsvf\" (UniqueName: \"kubernetes.io/projected/b7bf87bb-311e-448c-bf55-6fee43f5d997-kube-api-access-trsvf\") pod \"route-controller-manager-6fdc4c5b5b-2rmmq\" (UID: \"b7bf87bb-311e-448c-bf55-6fee43f5d997\") " pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" Jan 28 15:22:33 crc kubenswrapper[4871]: I0128 15:22:33.892464 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7bf87bb-311e-448c-bf55-6fee43f5d997-config\") pod \"route-controller-manager-6fdc4c5b5b-2rmmq\" (UID: \"b7bf87bb-311e-448c-bf55-6fee43f5d997\") " pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" Jan 28 15:22:33 crc kubenswrapper[4871]: I0128 15:22:33.892559 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7bf87bb-311e-448c-bf55-6fee43f5d997-serving-cert\") pod \"route-controller-manager-6fdc4c5b5b-2rmmq\" (UID: \"b7bf87bb-311e-448c-bf55-6fee43f5d997\") " pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" Jan 28 15:22:33 crc kubenswrapper[4871]: I0128 15:22:33.994446 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7bf87bb-311e-448c-bf55-6fee43f5d997-serving-cert\") pod \"route-controller-manager-6fdc4c5b5b-2rmmq\" (UID: \"b7bf87bb-311e-448c-bf55-6fee43f5d997\") " pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" Jan 28 15:22:33 crc kubenswrapper[4871]: I0128 15:22:33.994562 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7bf87bb-311e-448c-bf55-6fee43f5d997-client-ca\") pod 
\"route-controller-manager-6fdc4c5b5b-2rmmq\" (UID: \"b7bf87bb-311e-448c-bf55-6fee43f5d997\") " pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" Jan 28 15:22:33 crc kubenswrapper[4871]: I0128 15:22:33.994638 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trsvf\" (UniqueName: \"kubernetes.io/projected/b7bf87bb-311e-448c-bf55-6fee43f5d997-kube-api-access-trsvf\") pod \"route-controller-manager-6fdc4c5b5b-2rmmq\" (UID: \"b7bf87bb-311e-448c-bf55-6fee43f5d997\") " pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" Jan 28 15:22:33 crc kubenswrapper[4871]: I0128 15:22:33.994676 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7bf87bb-311e-448c-bf55-6fee43f5d997-config\") pod \"route-controller-manager-6fdc4c5b5b-2rmmq\" (UID: \"b7bf87bb-311e-448c-bf55-6fee43f5d997\") " pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" Jan 28 15:22:33 crc kubenswrapper[4871]: I0128 15:22:33.996344 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7bf87bb-311e-448c-bf55-6fee43f5d997-client-ca\") pod \"route-controller-manager-6fdc4c5b5b-2rmmq\" (UID: \"b7bf87bb-311e-448c-bf55-6fee43f5d997\") " pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" Jan 28 15:22:34 crc kubenswrapper[4871]: I0128 15:22:34.014193 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7bf87bb-311e-448c-bf55-6fee43f5d997-config\") pod \"route-controller-manager-6fdc4c5b5b-2rmmq\" (UID: \"b7bf87bb-311e-448c-bf55-6fee43f5d997\") " pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" Jan 28 15:22:34 crc kubenswrapper[4871]: I0128 15:22:34.014397 4871 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7bf87bb-311e-448c-bf55-6fee43f5d997-serving-cert\") pod \"route-controller-manager-6fdc4c5b5b-2rmmq\" (UID: \"b7bf87bb-311e-448c-bf55-6fee43f5d997\") " pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" Jan 28 15:22:34 crc kubenswrapper[4871]: I0128 15:22:34.022932 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trsvf\" (UniqueName: \"kubernetes.io/projected/b7bf87bb-311e-448c-bf55-6fee43f5d997-kube-api-access-trsvf\") pod \"route-controller-manager-6fdc4c5b5b-2rmmq\" (UID: \"b7bf87bb-311e-448c-bf55-6fee43f5d997\") " pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" Jan 28 15:22:34 crc kubenswrapper[4871]: I0128 15:22:34.113296 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" Jan 28 15:22:34 crc kubenswrapper[4871]: I0128 15:22:34.533932 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq"] Jan 28 15:22:34 crc kubenswrapper[4871]: I0128 15:22:34.640722 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" event={"ID":"b7bf87bb-311e-448c-bf55-6fee43f5d997","Type":"ContainerStarted","Data":"5a225fddbe8bf8349aef7fb11ddd903d7262ee91bdd5ee8fd0ed0df1c22fba2a"} Jan 28 15:22:35 crc kubenswrapper[4871]: I0128 15:22:35.647363 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" event={"ID":"b7bf87bb-311e-448c-bf55-6fee43f5d997","Type":"ContainerStarted","Data":"bb69806bd367acdc048a92f4c91d9213869831b2940cd6b928110dbc534af973"} Jan 28 15:22:35 crc kubenswrapper[4871]: I0128 15:22:35.647644 4871 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" Jan 28 15:22:35 crc kubenswrapper[4871]: I0128 15:22:35.656139 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" Jan 28 15:22:35 crc kubenswrapper[4871]: I0128 15:22:35.668373 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" podStartSLOduration=5.668348917 podStartE2EDuration="5.668348917s" podCreationTimestamp="2026-01-28 15:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:35.663173449 +0000 UTC m=+307.559011771" watchObservedRunningTime="2026-01-28 15:22:35.668348917 +0000 UTC m=+307.564187269" Jan 28 15:22:38 crc kubenswrapper[4871]: I0128 15:22:38.407145 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 28 15:22:42 crc kubenswrapper[4871]: E0128 15:22:42.518476 4871 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5226d39_f16c_4e81_8ae2_8d5f54a8a683.slice/crio-2bd4944a1a5d7c2403586d326a0845dd3f136102ea1a011df35342c707489c08.scope\": RecentStats: unable to find data in memory cache]" Jan 28 15:22:46 crc kubenswrapper[4871]: I0128 15:22:46.616548 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 15:22:48 crc kubenswrapper[4871]: I0128 15:22:48.871854 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76595bd9c4-rp4c9"] Jan 28 15:22:48 crc kubenswrapper[4871]: I0128 15:22:48.873604 4871 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" podUID="dd90d88a-927e-4a05-a572-1ddcfc3fe44b" containerName="controller-manager" containerID="cri-o://256c28b5b11401607e41394fd0a88aae696d260619a6207e7ed69967d0b542ab" gracePeriod=30 Jan 28 15:22:48 crc kubenswrapper[4871]: I0128 15:22:48.893924 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq"] Jan 28 15:22:48 crc kubenswrapper[4871]: I0128 15:22:48.894424 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" podUID="b7bf87bb-311e-448c-bf55-6fee43f5d997" containerName="route-controller-manager" containerID="cri-o://bb69806bd367acdc048a92f4c91d9213869831b2940cd6b928110dbc534af973" gracePeriod=30 Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.170846 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.414370 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.502352 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7bf87bb-311e-448c-bf55-6fee43f5d997-config\") pod \"b7bf87bb-311e-448c-bf55-6fee43f5d997\" (UID: \"b7bf87bb-311e-448c-bf55-6fee43f5d997\") " Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.502407 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trsvf\" (UniqueName: \"kubernetes.io/projected/b7bf87bb-311e-448c-bf55-6fee43f5d997-kube-api-access-trsvf\") pod \"b7bf87bb-311e-448c-bf55-6fee43f5d997\" (UID: \"b7bf87bb-311e-448c-bf55-6fee43f5d997\") " Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.502508 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7bf87bb-311e-448c-bf55-6fee43f5d997-client-ca\") pod \"b7bf87bb-311e-448c-bf55-6fee43f5d997\" (UID: \"b7bf87bb-311e-448c-bf55-6fee43f5d997\") " Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.502534 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7bf87bb-311e-448c-bf55-6fee43f5d997-serving-cert\") pod \"b7bf87bb-311e-448c-bf55-6fee43f5d997\" (UID: \"b7bf87bb-311e-448c-bf55-6fee43f5d997\") " Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.503346 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7bf87bb-311e-448c-bf55-6fee43f5d997-client-ca" (OuterVolumeSpecName: "client-ca") pod "b7bf87bb-311e-448c-bf55-6fee43f5d997" (UID: "b7bf87bb-311e-448c-bf55-6fee43f5d997"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.503521 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7bf87bb-311e-448c-bf55-6fee43f5d997-config" (OuterVolumeSpecName: "config") pod "b7bf87bb-311e-448c-bf55-6fee43f5d997" (UID: "b7bf87bb-311e-448c-bf55-6fee43f5d997"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.507909 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7bf87bb-311e-448c-bf55-6fee43f5d997-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b7bf87bb-311e-448c-bf55-6fee43f5d997" (UID: "b7bf87bb-311e-448c-bf55-6fee43f5d997"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.508003 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7bf87bb-311e-448c-bf55-6fee43f5d997-kube-api-access-trsvf" (OuterVolumeSpecName: "kube-api-access-trsvf") pod "b7bf87bb-311e-448c-bf55-6fee43f5d997" (UID: "b7bf87bb-311e-448c-bf55-6fee43f5d997"). InnerVolumeSpecName "kube-api-access-trsvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.512172 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.603479 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhxrt\" (UniqueName: \"kubernetes.io/projected/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-kube-api-access-lhxrt\") pod \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.603560 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-config\") pod \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.603614 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-serving-cert\") pod \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.603693 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-client-ca\") pod \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.603731 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-proxy-ca-bundles\") pod \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\" (UID: \"dd90d88a-927e-4a05-a572-1ddcfc3fe44b\") " Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.603953 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b7bf87bb-311e-448c-bf55-6fee43f5d997-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.603974 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trsvf\" (UniqueName: \"kubernetes.io/projected/b7bf87bb-311e-448c-bf55-6fee43f5d997-kube-api-access-trsvf\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.603987 4871 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7bf87bb-311e-448c-bf55-6fee43f5d997-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.603997 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7bf87bb-311e-448c-bf55-6fee43f5d997-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.604726 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-client-ca" (OuterVolumeSpecName: "client-ca") pod "dd90d88a-927e-4a05-a572-1ddcfc3fe44b" (UID: "dd90d88a-927e-4a05-a572-1ddcfc3fe44b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.604736 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "dd90d88a-927e-4a05-a572-1ddcfc3fe44b" (UID: "dd90d88a-927e-4a05-a572-1ddcfc3fe44b"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.604973 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-config" (OuterVolumeSpecName: "config") pod "dd90d88a-927e-4a05-a572-1ddcfc3fe44b" (UID: "dd90d88a-927e-4a05-a572-1ddcfc3fe44b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.606937 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dd90d88a-927e-4a05-a572-1ddcfc3fe44b" (UID: "dd90d88a-927e-4a05-a572-1ddcfc3fe44b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.607454 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-kube-api-access-lhxrt" (OuterVolumeSpecName: "kube-api-access-lhxrt") pod "dd90d88a-927e-4a05-a572-1ddcfc3fe44b" (UID: "dd90d88a-927e-4a05-a572-1ddcfc3fe44b"). InnerVolumeSpecName "kube-api-access-lhxrt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.705631 4871 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.705667 4871 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.705679 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhxrt\" (UniqueName: \"kubernetes.io/projected/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-kube-api-access-lhxrt\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.705688 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.705696 4871 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd90d88a-927e-4a05-a572-1ddcfc3fe44b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.741128 4871 generic.go:334] "Generic (PLEG): container finished" podID="dd90d88a-927e-4a05-a572-1ddcfc3fe44b" containerID="256c28b5b11401607e41394fd0a88aae696d260619a6207e7ed69967d0b542ab" exitCode=0 Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.741213 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.741269 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" event={"ID":"dd90d88a-927e-4a05-a572-1ddcfc3fe44b","Type":"ContainerDied","Data":"256c28b5b11401607e41394fd0a88aae696d260619a6207e7ed69967d0b542ab"} Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.741398 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76595bd9c4-rp4c9" event={"ID":"dd90d88a-927e-4a05-a572-1ddcfc3fe44b","Type":"ContainerDied","Data":"46351a19349b672a898b85e3c1e15e58548f2b56841b23c3b557cdc9fab4856d"} Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.741482 4871 scope.go:117] "RemoveContainer" containerID="256c28b5b11401607e41394fd0a88aae696d260619a6207e7ed69967d0b542ab" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.746710 4871 generic.go:334] "Generic (PLEG): container finished" podID="b7bf87bb-311e-448c-bf55-6fee43f5d997" containerID="bb69806bd367acdc048a92f4c91d9213869831b2940cd6b928110dbc534af973" exitCode=0 Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.746759 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" event={"ID":"b7bf87bb-311e-448c-bf55-6fee43f5d997","Type":"ContainerDied","Data":"bb69806bd367acdc048a92f4c91d9213869831b2940cd6b928110dbc534af973"} Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.746790 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" event={"ID":"b7bf87bb-311e-448c-bf55-6fee43f5d997","Type":"ContainerDied","Data":"5a225fddbe8bf8349aef7fb11ddd903d7262ee91bdd5ee8fd0ed0df1c22fba2a"} Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.746859 4871 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.771671 4871 scope.go:117] "RemoveContainer" containerID="256c28b5b11401607e41394fd0a88aae696d260619a6207e7ed69967d0b542ab" Jan 28 15:22:49 crc kubenswrapper[4871]: E0128 15:22:49.772881 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"256c28b5b11401607e41394fd0a88aae696d260619a6207e7ed69967d0b542ab\": container with ID starting with 256c28b5b11401607e41394fd0a88aae696d260619a6207e7ed69967d0b542ab not found: ID does not exist" containerID="256c28b5b11401607e41394fd0a88aae696d260619a6207e7ed69967d0b542ab" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.773224 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"256c28b5b11401607e41394fd0a88aae696d260619a6207e7ed69967d0b542ab"} err="failed to get container status \"256c28b5b11401607e41394fd0a88aae696d260619a6207e7ed69967d0b542ab\": rpc error: code = NotFound desc = could not find container \"256c28b5b11401607e41394fd0a88aae696d260619a6207e7ed69967d0b542ab\": container with ID starting with 256c28b5b11401607e41394fd0a88aae696d260619a6207e7ed69967d0b542ab not found: ID does not exist" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.773267 4871 scope.go:117] "RemoveContainer" containerID="bb69806bd367acdc048a92f4c91d9213869831b2940cd6b928110dbc534af973" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.792830 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76595bd9c4-rp4c9"] Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.800704 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-76595bd9c4-rp4c9"] Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.801356 
4871 scope.go:117] "RemoveContainer" containerID="bb69806bd367acdc048a92f4c91d9213869831b2940cd6b928110dbc534af973" Jan 28 15:22:49 crc kubenswrapper[4871]: E0128 15:22:49.805426 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb69806bd367acdc048a92f4c91d9213869831b2940cd6b928110dbc534af973\": container with ID starting with bb69806bd367acdc048a92f4c91d9213869831b2940cd6b928110dbc534af973 not found: ID does not exist" containerID="bb69806bd367acdc048a92f4c91d9213869831b2940cd6b928110dbc534af973" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.805472 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb69806bd367acdc048a92f4c91d9213869831b2940cd6b928110dbc534af973"} err="failed to get container status \"bb69806bd367acdc048a92f4c91d9213869831b2940cd6b928110dbc534af973\": rpc error: code = NotFound desc = could not find container \"bb69806bd367acdc048a92f4c91d9213869831b2940cd6b928110dbc534af973\": container with ID starting with bb69806bd367acdc048a92f4c91d9213869831b2940cd6b928110dbc534af973 not found: ID does not exist" Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.808034 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq"] Jan 28 15:22:49 crc kubenswrapper[4871]: I0128 15:22:49.827962 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fdc4c5b5b-2rmmq"] Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.787453 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b"] Jan 28 15:22:50 crc kubenswrapper[4871]: E0128 15:22:50.787717 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7bf87bb-311e-448c-bf55-6fee43f5d997" containerName="route-controller-manager" Jan 28 15:22:50 crc 
kubenswrapper[4871]: I0128 15:22:50.787731 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7bf87bb-311e-448c-bf55-6fee43f5d997" containerName="route-controller-manager" Jan 28 15:22:50 crc kubenswrapper[4871]: E0128 15:22:50.787744 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd90d88a-927e-4a05-a572-1ddcfc3fe44b" containerName="controller-manager" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.787752 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd90d88a-927e-4a05-a572-1ddcfc3fe44b" containerName="controller-manager" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.787851 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd90d88a-927e-4a05-a572-1ddcfc3fe44b" containerName="controller-manager" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.787869 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7bf87bb-311e-448c-bf55-6fee43f5d997" containerName="route-controller-manager" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.788264 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.789868 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.790794 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.790927 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.791026 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.790835 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m"] Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.791064 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.792503 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.793119 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.800618 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.800782 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.800846 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.801067 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.801211 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.801537 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.803072 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.807506 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b"] Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.827329 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m"] Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.912042 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7bf87bb-311e-448c-bf55-6fee43f5d997" path="/var/lib/kubelet/pods/b7bf87bb-311e-448c-bf55-6fee43f5d997/volumes" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.913133 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd90d88a-927e-4a05-a572-1ddcfc3fe44b" path="/var/lib/kubelet/pods/dd90d88a-927e-4a05-a572-1ddcfc3fe44b/volumes" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.920061 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8e0c9cd-c99c-4713-accc-1eb8dc21278b-config\") pod \"controller-manager-86f8dcd44f-lkq4b\" (UID: \"d8e0c9cd-c99c-4713-accc-1eb8dc21278b\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.920151 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8e0c9cd-c99c-4713-accc-1eb8dc21278b-proxy-ca-bundles\") pod \"controller-manager-86f8dcd44f-lkq4b\" (UID: \"d8e0c9cd-c99c-4713-accc-1eb8dc21278b\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.920191 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f22f8e31-8486-440d-895a-3f7d41b204bc-serving-cert\") pod \"route-controller-manager-7db789c79-jjv4m\" (UID: \"f22f8e31-8486-440d-895a-3f7d41b204bc\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.920280 4871 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk2lj\" (UniqueName: \"kubernetes.io/projected/f22f8e31-8486-440d-895a-3f7d41b204bc-kube-api-access-rk2lj\") pod \"route-controller-manager-7db789c79-jjv4m\" (UID: \"f22f8e31-8486-440d-895a-3f7d41b204bc\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.920322 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwpxs\" (UniqueName: \"kubernetes.io/projected/d8e0c9cd-c99c-4713-accc-1eb8dc21278b-kube-api-access-dwpxs\") pod \"controller-manager-86f8dcd44f-lkq4b\" (UID: \"d8e0c9cd-c99c-4713-accc-1eb8dc21278b\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.920351 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8e0c9cd-c99c-4713-accc-1eb8dc21278b-client-ca\") pod \"controller-manager-86f8dcd44f-lkq4b\" (UID: \"d8e0c9cd-c99c-4713-accc-1eb8dc21278b\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.920426 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f22f8e31-8486-440d-895a-3f7d41b204bc-client-ca\") pod \"route-controller-manager-7db789c79-jjv4m\" (UID: \"f22f8e31-8486-440d-895a-3f7d41b204bc\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.920484 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f22f8e31-8486-440d-895a-3f7d41b204bc-config\") pod 
\"route-controller-manager-7db789c79-jjv4m\" (UID: \"f22f8e31-8486-440d-895a-3f7d41b204bc\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m" Jan 28 15:22:50 crc kubenswrapper[4871]: I0128 15:22:50.920533 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8e0c9cd-c99c-4713-accc-1eb8dc21278b-serving-cert\") pod \"controller-manager-86f8dcd44f-lkq4b\" (UID: \"d8e0c9cd-c99c-4713-accc-1eb8dc21278b\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.021696 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8e0c9cd-c99c-4713-accc-1eb8dc21278b-proxy-ca-bundles\") pod \"controller-manager-86f8dcd44f-lkq4b\" (UID: \"d8e0c9cd-c99c-4713-accc-1eb8dc21278b\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.021744 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f22f8e31-8486-440d-895a-3f7d41b204bc-serving-cert\") pod \"route-controller-manager-7db789c79-jjv4m\" (UID: \"f22f8e31-8486-440d-895a-3f7d41b204bc\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.021770 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk2lj\" (UniqueName: \"kubernetes.io/projected/f22f8e31-8486-440d-895a-3f7d41b204bc-kube-api-access-rk2lj\") pod \"route-controller-manager-7db789c79-jjv4m\" (UID: \"f22f8e31-8486-440d-895a-3f7d41b204bc\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.021789 4871 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwpxs\" (UniqueName: \"kubernetes.io/projected/d8e0c9cd-c99c-4713-accc-1eb8dc21278b-kube-api-access-dwpxs\") pod \"controller-manager-86f8dcd44f-lkq4b\" (UID: \"d8e0c9cd-c99c-4713-accc-1eb8dc21278b\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.021823 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8e0c9cd-c99c-4713-accc-1eb8dc21278b-client-ca\") pod \"controller-manager-86f8dcd44f-lkq4b\" (UID: \"d8e0c9cd-c99c-4713-accc-1eb8dc21278b\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.021841 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f22f8e31-8486-440d-895a-3f7d41b204bc-client-ca\") pod \"route-controller-manager-7db789c79-jjv4m\" (UID: \"f22f8e31-8486-440d-895a-3f7d41b204bc\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.021860 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f22f8e31-8486-440d-895a-3f7d41b204bc-config\") pod \"route-controller-manager-7db789c79-jjv4m\" (UID: \"f22f8e31-8486-440d-895a-3f7d41b204bc\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.021903 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8e0c9cd-c99c-4713-accc-1eb8dc21278b-serving-cert\") pod \"controller-manager-86f8dcd44f-lkq4b\" (UID: \"d8e0c9cd-c99c-4713-accc-1eb8dc21278b\") " 
pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.021920 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8e0c9cd-c99c-4713-accc-1eb8dc21278b-config\") pod \"controller-manager-86f8dcd44f-lkq4b\" (UID: \"d8e0c9cd-c99c-4713-accc-1eb8dc21278b\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.023672 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f22f8e31-8486-440d-895a-3f7d41b204bc-config\") pod \"route-controller-manager-7db789c79-jjv4m\" (UID: \"f22f8e31-8486-440d-895a-3f7d41b204bc\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.023869 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f22f8e31-8486-440d-895a-3f7d41b204bc-client-ca\") pod \"route-controller-manager-7db789c79-jjv4m\" (UID: \"f22f8e31-8486-440d-895a-3f7d41b204bc\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.023990 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8e0c9cd-c99c-4713-accc-1eb8dc21278b-config\") pod \"controller-manager-86f8dcd44f-lkq4b\" (UID: \"d8e0c9cd-c99c-4713-accc-1eb8dc21278b\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.024484 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8e0c9cd-c99c-4713-accc-1eb8dc21278b-proxy-ca-bundles\") pod 
\"controller-manager-86f8dcd44f-lkq4b\" (UID: \"d8e0c9cd-c99c-4713-accc-1eb8dc21278b\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.025732 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8e0c9cd-c99c-4713-accc-1eb8dc21278b-client-ca\") pod \"controller-manager-86f8dcd44f-lkq4b\" (UID: \"d8e0c9cd-c99c-4713-accc-1eb8dc21278b\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.027571 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f22f8e31-8486-440d-895a-3f7d41b204bc-serving-cert\") pod \"route-controller-manager-7db789c79-jjv4m\" (UID: \"f22f8e31-8486-440d-895a-3f7d41b204bc\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.027631 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8e0c9cd-c99c-4713-accc-1eb8dc21278b-serving-cert\") pod \"controller-manager-86f8dcd44f-lkq4b\" (UID: \"d8e0c9cd-c99c-4713-accc-1eb8dc21278b\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.051472 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk2lj\" (UniqueName: \"kubernetes.io/projected/f22f8e31-8486-440d-895a-3f7d41b204bc-kube-api-access-rk2lj\") pod \"route-controller-manager-7db789c79-jjv4m\" (UID: \"f22f8e31-8486-440d-895a-3f7d41b204bc\") " pod="openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.056318 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dwpxs\" (UniqueName: \"kubernetes.io/projected/d8e0c9cd-c99c-4713-accc-1eb8dc21278b-kube-api-access-dwpxs\") pod \"controller-manager-86f8dcd44f-lkq4b\" (UID: \"d8e0c9cd-c99c-4713-accc-1eb8dc21278b\") " pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.121111 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.142444 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.370722 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b"] Jan 28 15:22:51 crc kubenswrapper[4871]: W0128 15:22:51.380723 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8e0c9cd_c99c_4713_accc_1eb8dc21278b.slice/crio-4bc45dc2eaf4fb9606a19fdfaf3650dffdd3b60bc6e14eec2ab10ae99f92bf1e WatchSource:0}: Error finding container 4bc45dc2eaf4fb9606a19fdfaf3650dffdd3b60bc6e14eec2ab10ae99f92bf1e: Status 404 returned error can't find the container with id 4bc45dc2eaf4fb9606a19fdfaf3650dffdd3b60bc6e14eec2ab10ae99f92bf1e Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.411985 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m"] Jan 28 15:22:51 crc kubenswrapper[4871]: W0128 15:22:51.436691 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf22f8e31_8486_440d_895a_3f7d41b204bc.slice/crio-cb7f30474a05b2ba1c849ae0ce528548c5d572f1c80dd66a1444726ed1277c39 WatchSource:0}: Error finding 
container cb7f30474a05b2ba1c849ae0ce528548c5d572f1c80dd66a1444726ed1277c39: Status 404 returned error can't find the container with id cb7f30474a05b2ba1c849ae0ce528548c5d572f1c80dd66a1444726ed1277c39 Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.759018 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m" event={"ID":"f22f8e31-8486-440d-895a-3f7d41b204bc","Type":"ContainerStarted","Data":"073ad72be1a85560697c11ecd20191551fa25191c322363eb33654bd4ef52dbc"} Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.759073 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m" event={"ID":"f22f8e31-8486-440d-895a-3f7d41b204bc","Type":"ContainerStarted","Data":"cb7f30474a05b2ba1c849ae0ce528548c5d572f1c80dd66a1444726ed1277c39"} Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.759469 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.760562 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" event={"ID":"d8e0c9cd-c99c-4713-accc-1eb8dc21278b","Type":"ContainerStarted","Data":"32ba8da1bab0d1b4cba26a9f709092e95e263eb495cf140f50b2967951052e78"} Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.760628 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" event={"ID":"d8e0c9cd-c99c-4713-accc-1eb8dc21278b","Type":"ContainerStarted","Data":"4bc45dc2eaf4fb9606a19fdfaf3650dffdd3b60bc6e14eec2ab10ae99f92bf1e"} Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.760821 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.768267 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.782444 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m" podStartSLOduration=3.782424325 podStartE2EDuration="3.782424325s" podCreationTimestamp="2026-01-28 15:22:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:51.779564773 +0000 UTC m=+323.675403105" watchObservedRunningTime="2026-01-28 15:22:51.782424325 +0000 UTC m=+323.678262667" Jan 28 15:22:51 crc kubenswrapper[4871]: I0128 15:22:51.800560 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86f8dcd44f-lkq4b" podStartSLOduration=3.800542774 podStartE2EDuration="3.800542774s" podCreationTimestamp="2026-01-28 15:22:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:51.798337283 +0000 UTC m=+323.694175635" watchObservedRunningTime="2026-01-28 15:22:51.800542774 +0000 UTC m=+323.696381096" Jan 28 15:22:52 crc kubenswrapper[4871]: I0128 15:22:52.112958 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7db789c79-jjv4m" Jan 28 15:22:52 crc kubenswrapper[4871]: E0128 15:22:52.657000 4871 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5226d39_f16c_4e81_8ae2_8d5f54a8a683.slice/crio-2bd4944a1a5d7c2403586d326a0845dd3f136102ea1a011df35342c707489c08.scope\": RecentStats: unable to find data in memory cache]" Jan 28 15:23:02 crc kubenswrapper[4871]: E0128 15:23:02.767722 4871 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5226d39_f16c_4e81_8ae2_8d5f54a8a683.slice/crio-2bd4944a1a5d7c2403586d326a0845dd3f136102ea1a011df35342c707489c08.scope\": RecentStats: unable to find data in memory cache]" Jan 28 15:23:12 crc kubenswrapper[4871]: E0128 15:23:12.922795 4871 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5226d39_f16c_4e81_8ae2_8d5f54a8a683.slice/crio-2bd4944a1a5d7c2403586d326a0845dd3f136102ea1a011df35342c707489c08.scope\": RecentStats: unable to find data in memory cache]" Jan 28 15:23:23 crc kubenswrapper[4871]: E0128 15:23:23.054561 4871 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5226d39_f16c_4e81_8ae2_8d5f54a8a683.slice/crio-2bd4944a1a5d7c2403586d326a0845dd3f136102ea1a011df35342c707489c08.scope\": RecentStats: unable to find data in memory cache]" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.133944 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mnmsx"] Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.137857 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.138803 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mnmsx"] Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.318621 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.318681 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmtdf\" (UniqueName: \"kubernetes.io/projected/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-kube-api-access-lmtdf\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.318760 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-registry-tls\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.318792 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-trusted-ca\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.318834 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.318857 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-bound-sa-token\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.318885 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-registry-certificates\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.318904 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.358493 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.420071 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-bound-sa-token\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.420135 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-registry-certificates\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.420163 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.420219 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 
15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.420244 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmtdf\" (UniqueName: \"kubernetes.io/projected/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-kube-api-access-lmtdf\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.420285 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-registry-tls\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.420311 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-trusted-ca\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.421021 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.422145 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-registry-certificates\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.422403 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-trusted-ca\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.426794 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.427068 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-registry-tls\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.439937 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmtdf\" (UniqueName: \"kubernetes.io/projected/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-kube-api-access-lmtdf\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: \"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.440047 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3ae9011-31fe-4d0e-966c-634d9dbc3b90-bound-sa-token\") pod \"image-registry-66df7c8f76-mnmsx\" (UID: 
\"d3ae9011-31fe-4d0e-966c-634d9dbc3b90\") " pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.463854 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.894365 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mnmsx"] Jan 28 15:23:29 crc kubenswrapper[4871]: I0128 15:23:29.981808 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" event={"ID":"d3ae9011-31fe-4d0e-966c-634d9dbc3b90","Type":"ContainerStarted","Data":"c78892b12330cebc34037d752eeaacf8beb53e6a50490fcf147551d246978957"} Jan 28 15:23:30 crc kubenswrapper[4871]: I0128 15:23:30.989135 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" event={"ID":"d3ae9011-31fe-4d0e-966c-634d9dbc3b90","Type":"ContainerStarted","Data":"fd8e221d9a773881533985184bfea67ac8337fc5d277875e6a6de2f35a439320"} Jan 28 15:23:30 crc kubenswrapper[4871]: I0128 15:23:30.990331 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:31 crc kubenswrapper[4871]: I0128 15:23:31.017115 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" podStartSLOduration=2.017093003 podStartE2EDuration="2.017093003s" podCreationTimestamp="2026-01-28 15:23:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:31.012425995 +0000 UTC m=+362.908264347" watchObservedRunningTime="2026-01-28 15:23:31.017093003 +0000 UTC m=+362.912931325" Jan 28 15:23:43 crc kubenswrapper[4871]: I0128 
15:23:43.813825 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:23:43 crc kubenswrapper[4871]: I0128 15:23:43.815691 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:23:47 crc kubenswrapper[4871]: I0128 15:23:47.984424 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mjks9"] Jan 28 15:23:47 crc kubenswrapper[4871]: I0128 15:23:47.985441 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mjks9" podUID="e209b4f2-aaea-496b-8e14-58f2aa8faaa5" containerName="registry-server" containerID="cri-o://0b16713e8946b700c358d9b1c6eb1db8567dd58bf9276af91cfe20a9bbd97011" gracePeriod=30 Jan 28 15:23:47 crc kubenswrapper[4871]: I0128 15:23:47.991946 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g7btj"] Jan 28 15:23:47 crc kubenswrapper[4871]: I0128 15:23:47.992190 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g7btj" podUID="9b83a5ec-9ed6-4e66-9a39-610a39f64d19" containerName="registry-server" containerID="cri-o://40065d911288088f9fb363f8e4e01affe9a6970d5d2c6b4be3a1f393cc9f9efc" gracePeriod=30 Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.006004 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ql229"] Jan 28 15:23:48 crc 
kubenswrapper[4871]: I0128 15:23:48.006236 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-ql229" podUID="48a7be4a-2d1b-4b46-a720-4068e3fad906" containerName="marketplace-operator" containerID="cri-o://0c3e1b0feb63457dbffe3326079e906f6b122a98866b8dfdcb312fac2b70e47b" gracePeriod=30 Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.017538 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9dhs"] Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.017882 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j9dhs" podUID="6575ba24-dfe2-4f55-96ee-6692928debdd" containerName="registry-server" containerID="cri-o://1a3764765ae722333d10f0e1d375a7e73470d0cea68ac126b26f4db0d7353731" gracePeriod=30 Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.021032 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xdhpp"] Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.021306 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xdhpp" podUID="9524a883-d083-471d-8c30-866172b8456e" containerName="registry-server" containerID="cri-o://084ed470cb799b27a8f7973a6e34c789f91540e741c821fcabca71b27d30ddb4" gracePeriod=30 Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.032391 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x5bm6"] Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.033150 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x5bm6" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.047462 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x5bm6"] Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.182413 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9487509e-3495-440b-9698-6669be6a0d5a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x5bm6\" (UID: \"9487509e-3495-440b-9698-6669be6a0d5a\") " pod="openshift-marketplace/marketplace-operator-79b997595-x5bm6" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.184787 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9487509e-3495-440b-9698-6669be6a0d5a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x5bm6\" (UID: \"9487509e-3495-440b-9698-6669be6a0d5a\") " pod="openshift-marketplace/marketplace-operator-79b997595-x5bm6" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.184869 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcf6k\" (UniqueName: \"kubernetes.io/projected/9487509e-3495-440b-9698-6669be6a0d5a-kube-api-access-mcf6k\") pod \"marketplace-operator-79b997595-x5bm6\" (UID: \"9487509e-3495-440b-9698-6669be6a0d5a\") " pod="openshift-marketplace/marketplace-operator-79b997595-x5bm6" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.286207 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9487509e-3495-440b-9698-6669be6a0d5a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x5bm6\" (UID: 
\"9487509e-3495-440b-9698-6669be6a0d5a\") " pod="openshift-marketplace/marketplace-operator-79b997595-x5bm6" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.286261 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9487509e-3495-440b-9698-6669be6a0d5a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x5bm6\" (UID: \"9487509e-3495-440b-9698-6669be6a0d5a\") " pod="openshift-marketplace/marketplace-operator-79b997595-x5bm6" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.286306 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcf6k\" (UniqueName: \"kubernetes.io/projected/9487509e-3495-440b-9698-6669be6a0d5a-kube-api-access-mcf6k\") pod \"marketplace-operator-79b997595-x5bm6\" (UID: \"9487509e-3495-440b-9698-6669be6a0d5a\") " pod="openshift-marketplace/marketplace-operator-79b997595-x5bm6" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.287380 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9487509e-3495-440b-9698-6669be6a0d5a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x5bm6\" (UID: \"9487509e-3495-440b-9698-6669be6a0d5a\") " pod="openshift-marketplace/marketplace-operator-79b997595-x5bm6" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.297417 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9487509e-3495-440b-9698-6669be6a0d5a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x5bm6\" (UID: \"9487509e-3495-440b-9698-6669be6a0d5a\") " pod="openshift-marketplace/marketplace-operator-79b997595-x5bm6" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.311839 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mcf6k\" (UniqueName: \"kubernetes.io/projected/9487509e-3495-440b-9698-6669be6a0d5a-kube-api-access-mcf6k\") pod \"marketplace-operator-79b997595-x5bm6\" (UID: \"9487509e-3495-440b-9698-6669be6a0d5a\") " pod="openshift-marketplace/marketplace-operator-79b997595-x5bm6" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.352241 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x5bm6" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.438935 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g7btj" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.476995 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mjks9" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.567859 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ql229" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.575992 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xdhpp" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.586366 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9dhs" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.592022 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e209b4f2-aaea-496b-8e14-58f2aa8faaa5-utilities\") pod \"e209b4f2-aaea-496b-8e14-58f2aa8faaa5\" (UID: \"e209b4f2-aaea-496b-8e14-58f2aa8faaa5\") " Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.592061 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qsb8\" (UniqueName: \"kubernetes.io/projected/e209b4f2-aaea-496b-8e14-58f2aa8faaa5-kube-api-access-4qsb8\") pod \"e209b4f2-aaea-496b-8e14-58f2aa8faaa5\" (UID: \"e209b4f2-aaea-496b-8e14-58f2aa8faaa5\") " Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.592091 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b83a5ec-9ed6-4e66-9a39-610a39f64d19-utilities\") pod \"9b83a5ec-9ed6-4e66-9a39-610a39f64d19\" (UID: \"9b83a5ec-9ed6-4e66-9a39-610a39f64d19\") " Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.592124 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfvtx\" (UniqueName: \"kubernetes.io/projected/9b83a5ec-9ed6-4e66-9a39-610a39f64d19-kube-api-access-tfvtx\") pod \"9b83a5ec-9ed6-4e66-9a39-610a39f64d19\" (UID: \"9b83a5ec-9ed6-4e66-9a39-610a39f64d19\") " Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.592162 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b83a5ec-9ed6-4e66-9a39-610a39f64d19-catalog-content\") pod \"9b83a5ec-9ed6-4e66-9a39-610a39f64d19\" (UID: \"9b83a5ec-9ed6-4e66-9a39-610a39f64d19\") " Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.592201 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e209b4f2-aaea-496b-8e14-58f2aa8faaa5-catalog-content\") pod \"e209b4f2-aaea-496b-8e14-58f2aa8faaa5\" (UID: \"e209b4f2-aaea-496b-8e14-58f2aa8faaa5\") " Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.593562 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b83a5ec-9ed6-4e66-9a39-610a39f64d19-utilities" (OuterVolumeSpecName: "utilities") pod "9b83a5ec-9ed6-4e66-9a39-610a39f64d19" (UID: "9b83a5ec-9ed6-4e66-9a39-610a39f64d19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.593946 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e209b4f2-aaea-496b-8e14-58f2aa8faaa5-utilities" (OuterVolumeSpecName: "utilities") pod "e209b4f2-aaea-496b-8e14-58f2aa8faaa5" (UID: "e209b4f2-aaea-496b-8e14-58f2aa8faaa5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.595537 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e209b4f2-aaea-496b-8e14-58f2aa8faaa5-kube-api-access-4qsb8" (OuterVolumeSpecName: "kube-api-access-4qsb8") pod "e209b4f2-aaea-496b-8e14-58f2aa8faaa5" (UID: "e209b4f2-aaea-496b-8e14-58f2aa8faaa5"). InnerVolumeSpecName "kube-api-access-4qsb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.596096 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b83a5ec-9ed6-4e66-9a39-610a39f64d19-kube-api-access-tfvtx" (OuterVolumeSpecName: "kube-api-access-tfvtx") pod "9b83a5ec-9ed6-4e66-9a39-610a39f64d19" (UID: "9b83a5ec-9ed6-4e66-9a39-610a39f64d19"). InnerVolumeSpecName "kube-api-access-tfvtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.627878 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x5bm6"] Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.672316 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b83a5ec-9ed6-4e66-9a39-610a39f64d19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b83a5ec-9ed6-4e66-9a39-610a39f64d19" (UID: "9b83a5ec-9ed6-4e66-9a39-610a39f64d19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.677098 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e209b4f2-aaea-496b-8e14-58f2aa8faaa5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e209b4f2-aaea-496b-8e14-58f2aa8faaa5" (UID: "e209b4f2-aaea-496b-8e14-58f2aa8faaa5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.693532 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48a7be4a-2d1b-4b46-a720-4068e3fad906-marketplace-trusted-ca\") pod \"48a7be4a-2d1b-4b46-a720-4068e3fad906\" (UID: \"48a7be4a-2d1b-4b46-a720-4068e3fad906\") " Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.693644 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9524a883-d083-471d-8c30-866172b8456e-utilities\") pod \"9524a883-d083-471d-8c30-866172b8456e\" (UID: \"9524a883-d083-471d-8c30-866172b8456e\") " Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.693679 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcrhx\" (UniqueName: \"kubernetes.io/projected/9524a883-d083-471d-8c30-866172b8456e-kube-api-access-fcrhx\") pod \"9524a883-d083-471d-8c30-866172b8456e\" (UID: \"9524a883-d083-471d-8c30-866172b8456e\") " Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.693785 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6575ba24-dfe2-4f55-96ee-6692928debdd-utilities\") pod \"6575ba24-dfe2-4f55-96ee-6692928debdd\" (UID: \"6575ba24-dfe2-4f55-96ee-6692928debdd\") " Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.693816 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9524a883-d083-471d-8c30-866172b8456e-catalog-content\") pod \"9524a883-d083-471d-8c30-866172b8456e\" (UID: \"9524a883-d083-471d-8c30-866172b8456e\") " Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.693839 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6575ba24-dfe2-4f55-96ee-6692928debdd-catalog-content\") pod \"6575ba24-dfe2-4f55-96ee-6692928debdd\" (UID: \"6575ba24-dfe2-4f55-96ee-6692928debdd\") " Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.693862 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm2hd\" (UniqueName: \"kubernetes.io/projected/6575ba24-dfe2-4f55-96ee-6692928debdd-kube-api-access-xm2hd\") pod \"6575ba24-dfe2-4f55-96ee-6692928debdd\" (UID: \"6575ba24-dfe2-4f55-96ee-6692928debdd\") " Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.693885 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpb9p\" (UniqueName: \"kubernetes.io/projected/48a7be4a-2d1b-4b46-a720-4068e3fad906-kube-api-access-gpb9p\") pod \"48a7be4a-2d1b-4b46-a720-4068e3fad906\" (UID: \"48a7be4a-2d1b-4b46-a720-4068e3fad906\") " Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.693909 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48a7be4a-2d1b-4b46-a720-4068e3fad906-marketplace-operator-metrics\") pod \"48a7be4a-2d1b-4b46-a720-4068e3fad906\" (UID: \"48a7be4a-2d1b-4b46-a720-4068e3fad906\") " Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.694158 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e209b4f2-aaea-496b-8e14-58f2aa8faaa5-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.694181 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qsb8\" (UniqueName: \"kubernetes.io/projected/e209b4f2-aaea-496b-8e14-58f2aa8faaa5-kube-api-access-4qsb8\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.694196 4871 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b83a5ec-9ed6-4e66-9a39-610a39f64d19-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.694210 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfvtx\" (UniqueName: \"kubernetes.io/projected/9b83a5ec-9ed6-4e66-9a39-610a39f64d19-kube-api-access-tfvtx\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.694222 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b83a5ec-9ed6-4e66-9a39-610a39f64d19-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.694232 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e209b4f2-aaea-496b-8e14-58f2aa8faaa5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.698971 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9524a883-d083-471d-8c30-866172b8456e-utilities" (OuterVolumeSpecName: "utilities") pod "9524a883-d083-471d-8c30-866172b8456e" (UID: "9524a883-d083-471d-8c30-866172b8456e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.699551 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48a7be4a-2d1b-4b46-a720-4068e3fad906-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "48a7be4a-2d1b-4b46-a720-4068e3fad906" (UID: "48a7be4a-2d1b-4b46-a720-4068e3fad906"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.701007 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a7be4a-2d1b-4b46-a720-4068e3fad906-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "48a7be4a-2d1b-4b46-a720-4068e3fad906" (UID: "48a7be4a-2d1b-4b46-a720-4068e3fad906"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.701179 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a7be4a-2d1b-4b46-a720-4068e3fad906-kube-api-access-gpb9p" (OuterVolumeSpecName: "kube-api-access-gpb9p") pod "48a7be4a-2d1b-4b46-a720-4068e3fad906" (UID: "48a7be4a-2d1b-4b46-a720-4068e3fad906"). InnerVolumeSpecName "kube-api-access-gpb9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.702415 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9524a883-d083-471d-8c30-866172b8456e-kube-api-access-fcrhx" (OuterVolumeSpecName: "kube-api-access-fcrhx") pod "9524a883-d083-471d-8c30-866172b8456e" (UID: "9524a883-d083-471d-8c30-866172b8456e"). InnerVolumeSpecName "kube-api-access-fcrhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.702846 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6575ba24-dfe2-4f55-96ee-6692928debdd-kube-api-access-xm2hd" (OuterVolumeSpecName: "kube-api-access-xm2hd") pod "6575ba24-dfe2-4f55-96ee-6692928debdd" (UID: "6575ba24-dfe2-4f55-96ee-6692928debdd"). InnerVolumeSpecName "kube-api-access-xm2hd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.705331 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6575ba24-dfe2-4f55-96ee-6692928debdd-utilities" (OuterVolumeSpecName: "utilities") pod "6575ba24-dfe2-4f55-96ee-6692928debdd" (UID: "6575ba24-dfe2-4f55-96ee-6692928debdd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.726002 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6575ba24-dfe2-4f55-96ee-6692928debdd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6575ba24-dfe2-4f55-96ee-6692928debdd" (UID: "6575ba24-dfe2-4f55-96ee-6692928debdd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.795470 4871 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48a7be4a-2d1b-4b46-a720-4068e3fad906-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.795836 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9524a883-d083-471d-8c30-866172b8456e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.795849 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcrhx\" (UniqueName: \"kubernetes.io/projected/9524a883-d083-471d-8c30-866172b8456e-kube-api-access-fcrhx\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.795858 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6575ba24-dfe2-4f55-96ee-6692928debdd-utilities\") on node \"crc\" 
DevicePath \"\"" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.795867 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6575ba24-dfe2-4f55-96ee-6692928debdd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.795875 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm2hd\" (UniqueName: \"kubernetes.io/projected/6575ba24-dfe2-4f55-96ee-6692928debdd-kube-api-access-xm2hd\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.795882 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpb9p\" (UniqueName: \"kubernetes.io/projected/48a7be4a-2d1b-4b46-a720-4068e3fad906-kube-api-access-gpb9p\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.795890 4871 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48a7be4a-2d1b-4b46-a720-4068e3fad906-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.816929 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9524a883-d083-471d-8c30-866172b8456e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9524a883-d083-471d-8c30-866172b8456e" (UID: "9524a883-d083-471d-8c30-866172b8456e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:23:48 crc kubenswrapper[4871]: I0128 15:23:48.897057 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9524a883-d083-471d-8c30-866172b8456e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.095453 4871 generic.go:334] "Generic (PLEG): container finished" podID="9b83a5ec-9ed6-4e66-9a39-610a39f64d19" containerID="40065d911288088f9fb363f8e4e01affe9a6970d5d2c6b4be3a1f393cc9f9efc" exitCode=0 Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.095544 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7btj" event={"ID":"9b83a5ec-9ed6-4e66-9a39-610a39f64d19","Type":"ContainerDied","Data":"40065d911288088f9fb363f8e4e01affe9a6970d5d2c6b4be3a1f393cc9f9efc"} Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.095620 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7btj" event={"ID":"9b83a5ec-9ed6-4e66-9a39-610a39f64d19","Type":"ContainerDied","Data":"5926a38b3856a353d17e05aa3104096bd6083f0c25411f14dc5494d2a3f9a5c1"} Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.095643 4871 scope.go:117] "RemoveContainer" containerID="40065d911288088f9fb363f8e4e01affe9a6970d5d2c6b4be3a1f393cc9f9efc" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.095665 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g7btj" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.097563 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x5bm6" event={"ID":"9487509e-3495-440b-9698-6669be6a0d5a","Type":"ContainerStarted","Data":"2a84514320bda2a730017818f05efee2a2a8b18d8164017b0e12b52aa6b6172b"} Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.097637 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x5bm6" event={"ID":"9487509e-3495-440b-9698-6669be6a0d5a","Type":"ContainerStarted","Data":"560a2481c71f0002dc395a2fa823a3839ce30fefa11f485d3bffc6190be76409"} Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.097845 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-x5bm6" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.102955 4871 generic.go:334] "Generic (PLEG): container finished" podID="9524a883-d083-471d-8c30-866172b8456e" containerID="084ed470cb799b27a8f7973a6e34c789f91540e741c821fcabca71b27d30ddb4" exitCode=0 Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.103044 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xdhpp" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.103107 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdhpp" event={"ID":"9524a883-d083-471d-8c30-866172b8456e","Type":"ContainerDied","Data":"084ed470cb799b27a8f7973a6e34c789f91540e741c821fcabca71b27d30ddb4"} Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.103142 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdhpp" event={"ID":"9524a883-d083-471d-8c30-866172b8456e","Type":"ContainerDied","Data":"6c08175738eb3d7af5bda765afa4f9db8ada5c4bb65e567f1ba9dcc5f6a85355"} Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.105826 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-x5bm6" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.118177 4871 generic.go:334] "Generic (PLEG): container finished" podID="e209b4f2-aaea-496b-8e14-58f2aa8faaa5" containerID="0b16713e8946b700c358d9b1c6eb1db8567dd58bf9276af91cfe20a9bbd97011" exitCode=0 Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.118253 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjks9" event={"ID":"e209b4f2-aaea-496b-8e14-58f2aa8faaa5","Type":"ContainerDied","Data":"0b16713e8946b700c358d9b1c6eb1db8567dd58bf9276af91cfe20a9bbd97011"} Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.118287 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjks9" event={"ID":"e209b4f2-aaea-496b-8e14-58f2aa8faaa5","Type":"ContainerDied","Data":"c41f98c5d9a96e2501dc17456036876b945f30296717fe2a294c5c87d4c76735"} Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.118359 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mjks9" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.121217 4871 scope.go:117] "RemoveContainer" containerID="30cd828a28c7ef5585485a7c4743d26c2c76c8c4e4bfc61cf754c927935c966f" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.125640 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-x5bm6" podStartSLOduration=1.125619412 podStartE2EDuration="1.125619412s" podCreationTimestamp="2026-01-28 15:23:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:49.12488445 +0000 UTC m=+381.020722772" watchObservedRunningTime="2026-01-28 15:23:49.125619412 +0000 UTC m=+381.021457734" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.131079 4871 generic.go:334] "Generic (PLEG): container finished" podID="6575ba24-dfe2-4f55-96ee-6692928debdd" containerID="1a3764765ae722333d10f0e1d375a7e73470d0cea68ac126b26f4db0d7353731" exitCode=0 Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.131164 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9dhs" event={"ID":"6575ba24-dfe2-4f55-96ee-6692928debdd","Type":"ContainerDied","Data":"1a3764765ae722333d10f0e1d375a7e73470d0cea68ac126b26f4db0d7353731"} Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.131198 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9dhs" event={"ID":"6575ba24-dfe2-4f55-96ee-6692928debdd","Type":"ContainerDied","Data":"6f19e3ff60312f2f7e9be3411b1b02006cc948608b6f1adc92e6222513a6f127"} Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.131243 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9dhs" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.139905 4871 generic.go:334] "Generic (PLEG): container finished" podID="48a7be4a-2d1b-4b46-a720-4068e3fad906" containerID="0c3e1b0feb63457dbffe3326079e906f6b122a98866b8dfdcb312fac2b70e47b" exitCode=0 Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.139950 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ql229" event={"ID":"48a7be4a-2d1b-4b46-a720-4068e3fad906","Type":"ContainerDied","Data":"0c3e1b0feb63457dbffe3326079e906f6b122a98866b8dfdcb312fac2b70e47b"} Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.139978 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ql229" event={"ID":"48a7be4a-2d1b-4b46-a720-4068e3fad906","Type":"ContainerDied","Data":"4a27aa4ea18e42781af15fd93078a981f24f22e9a64efaff0b9142eb1c38a363"} Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.140011 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ql229" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.148472 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g7btj"] Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.153344 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g7btj"] Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.177379 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xdhpp"] Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.181122 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xdhpp"] Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.184170 4871 scope.go:117] "RemoveContainer" containerID="a5ee91f3edce66441308abbfcfa3536bf1f8ddfc969956dacd322133350d6e52" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.198402 4871 scope.go:117] "RemoveContainer" containerID="40065d911288088f9fb363f8e4e01affe9a6970d5d2c6b4be3a1f393cc9f9efc" Jan 28 15:23:49 crc kubenswrapper[4871]: E0128 15:23:49.198866 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40065d911288088f9fb363f8e4e01affe9a6970d5d2c6b4be3a1f393cc9f9efc\": container with ID starting with 40065d911288088f9fb363f8e4e01affe9a6970d5d2c6b4be3a1f393cc9f9efc not found: ID does not exist" containerID="40065d911288088f9fb363f8e4e01affe9a6970d5d2c6b4be3a1f393cc9f9efc" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.198903 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40065d911288088f9fb363f8e4e01affe9a6970d5d2c6b4be3a1f393cc9f9efc"} err="failed to get container status \"40065d911288088f9fb363f8e4e01affe9a6970d5d2c6b4be3a1f393cc9f9efc\": rpc error: code = NotFound desc = could not find container 
\"40065d911288088f9fb363f8e4e01affe9a6970d5d2c6b4be3a1f393cc9f9efc\": container with ID starting with 40065d911288088f9fb363f8e4e01affe9a6970d5d2c6b4be3a1f393cc9f9efc not found: ID does not exist" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.198931 4871 scope.go:117] "RemoveContainer" containerID="30cd828a28c7ef5585485a7c4743d26c2c76c8c4e4bfc61cf754c927935c966f" Jan 28 15:23:49 crc kubenswrapper[4871]: E0128 15:23:49.199189 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30cd828a28c7ef5585485a7c4743d26c2c76c8c4e4bfc61cf754c927935c966f\": container with ID starting with 30cd828a28c7ef5585485a7c4743d26c2c76c8c4e4bfc61cf754c927935c966f not found: ID does not exist" containerID="30cd828a28c7ef5585485a7c4743d26c2c76c8c4e4bfc61cf754c927935c966f" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.199217 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30cd828a28c7ef5585485a7c4743d26c2c76c8c4e4bfc61cf754c927935c966f"} err="failed to get container status \"30cd828a28c7ef5585485a7c4743d26c2c76c8c4e4bfc61cf754c927935c966f\": rpc error: code = NotFound desc = could not find container \"30cd828a28c7ef5585485a7c4743d26c2c76c8c4e4bfc61cf754c927935c966f\": container with ID starting with 30cd828a28c7ef5585485a7c4743d26c2c76c8c4e4bfc61cf754c927935c966f not found: ID does not exist" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.199236 4871 scope.go:117] "RemoveContainer" containerID="a5ee91f3edce66441308abbfcfa3536bf1f8ddfc969956dacd322133350d6e52" Jan 28 15:23:49 crc kubenswrapper[4871]: E0128 15:23:49.199454 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5ee91f3edce66441308abbfcfa3536bf1f8ddfc969956dacd322133350d6e52\": container with ID starting with a5ee91f3edce66441308abbfcfa3536bf1f8ddfc969956dacd322133350d6e52 not found: ID does not exist" 
containerID="a5ee91f3edce66441308abbfcfa3536bf1f8ddfc969956dacd322133350d6e52" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.199476 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5ee91f3edce66441308abbfcfa3536bf1f8ddfc969956dacd322133350d6e52"} err="failed to get container status \"a5ee91f3edce66441308abbfcfa3536bf1f8ddfc969956dacd322133350d6e52\": rpc error: code = NotFound desc = could not find container \"a5ee91f3edce66441308abbfcfa3536bf1f8ddfc969956dacd322133350d6e52\": container with ID starting with a5ee91f3edce66441308abbfcfa3536bf1f8ddfc969956dacd322133350d6e52 not found: ID does not exist" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.199487 4871 scope.go:117] "RemoveContainer" containerID="084ed470cb799b27a8f7973a6e34c789f91540e741c821fcabca71b27d30ddb4" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.223468 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mjks9"] Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.223715 4871 scope.go:117] "RemoveContainer" containerID="f60d614295b1b9292926fcddb53c4d0c9e9e4eb0c70263ea02518ccca6085f6a" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.235749 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mjks9"] Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.248147 4871 scope.go:117] "RemoveContainer" containerID="4211dd4d67ec640f0657d91b5179acf5ee27c54fb03a1e66f3304ac1f93364cf" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.248420 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ql229"] Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.253335 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ql229"] Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.257429 4871 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9dhs"] Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.261111 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9dhs"] Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.272982 4871 scope.go:117] "RemoveContainer" containerID="084ed470cb799b27a8f7973a6e34c789f91540e741c821fcabca71b27d30ddb4" Jan 28 15:23:49 crc kubenswrapper[4871]: E0128 15:23:49.273416 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"084ed470cb799b27a8f7973a6e34c789f91540e741c821fcabca71b27d30ddb4\": container with ID starting with 084ed470cb799b27a8f7973a6e34c789f91540e741c821fcabca71b27d30ddb4 not found: ID does not exist" containerID="084ed470cb799b27a8f7973a6e34c789f91540e741c821fcabca71b27d30ddb4" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.273443 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"084ed470cb799b27a8f7973a6e34c789f91540e741c821fcabca71b27d30ddb4"} err="failed to get container status \"084ed470cb799b27a8f7973a6e34c789f91540e741c821fcabca71b27d30ddb4\": rpc error: code = NotFound desc = could not find container \"084ed470cb799b27a8f7973a6e34c789f91540e741c821fcabca71b27d30ddb4\": container with ID starting with 084ed470cb799b27a8f7973a6e34c789f91540e741c821fcabca71b27d30ddb4 not found: ID does not exist" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.273464 4871 scope.go:117] "RemoveContainer" containerID="f60d614295b1b9292926fcddb53c4d0c9e9e4eb0c70263ea02518ccca6085f6a" Jan 28 15:23:49 crc kubenswrapper[4871]: E0128 15:23:49.273673 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f60d614295b1b9292926fcddb53c4d0c9e9e4eb0c70263ea02518ccca6085f6a\": container with ID starting with 
f60d614295b1b9292926fcddb53c4d0c9e9e4eb0c70263ea02518ccca6085f6a not found: ID does not exist" containerID="f60d614295b1b9292926fcddb53c4d0c9e9e4eb0c70263ea02518ccca6085f6a" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.273709 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f60d614295b1b9292926fcddb53c4d0c9e9e4eb0c70263ea02518ccca6085f6a"} err="failed to get container status \"f60d614295b1b9292926fcddb53c4d0c9e9e4eb0c70263ea02518ccca6085f6a\": rpc error: code = NotFound desc = could not find container \"f60d614295b1b9292926fcddb53c4d0c9e9e4eb0c70263ea02518ccca6085f6a\": container with ID starting with f60d614295b1b9292926fcddb53c4d0c9e9e4eb0c70263ea02518ccca6085f6a not found: ID does not exist" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.273736 4871 scope.go:117] "RemoveContainer" containerID="4211dd4d67ec640f0657d91b5179acf5ee27c54fb03a1e66f3304ac1f93364cf" Jan 28 15:23:49 crc kubenswrapper[4871]: E0128 15:23:49.273933 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4211dd4d67ec640f0657d91b5179acf5ee27c54fb03a1e66f3304ac1f93364cf\": container with ID starting with 4211dd4d67ec640f0657d91b5179acf5ee27c54fb03a1e66f3304ac1f93364cf not found: ID does not exist" containerID="4211dd4d67ec640f0657d91b5179acf5ee27c54fb03a1e66f3304ac1f93364cf" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.273952 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4211dd4d67ec640f0657d91b5179acf5ee27c54fb03a1e66f3304ac1f93364cf"} err="failed to get container status \"4211dd4d67ec640f0657d91b5179acf5ee27c54fb03a1e66f3304ac1f93364cf\": rpc error: code = NotFound desc = could not find container \"4211dd4d67ec640f0657d91b5179acf5ee27c54fb03a1e66f3304ac1f93364cf\": container with ID starting with 4211dd4d67ec640f0657d91b5179acf5ee27c54fb03a1e66f3304ac1f93364cf not found: ID does not 
exist" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.273965 4871 scope.go:117] "RemoveContainer" containerID="0b16713e8946b700c358d9b1c6eb1db8567dd58bf9276af91cfe20a9bbd97011" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.285493 4871 scope.go:117] "RemoveContainer" containerID="b9e99db464bc9e3a1e0b34932c94c2e7dae3694150811fc884fc475944dcc933" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.295523 4871 scope.go:117] "RemoveContainer" containerID="eef1f938017d3cac965244bb04470f3fb5e2808053a7c55e66d9f563830bff8f" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.308258 4871 scope.go:117] "RemoveContainer" containerID="0b16713e8946b700c358d9b1c6eb1db8567dd58bf9276af91cfe20a9bbd97011" Jan 28 15:23:49 crc kubenswrapper[4871]: E0128 15:23:49.308709 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b16713e8946b700c358d9b1c6eb1db8567dd58bf9276af91cfe20a9bbd97011\": container with ID starting with 0b16713e8946b700c358d9b1c6eb1db8567dd58bf9276af91cfe20a9bbd97011 not found: ID does not exist" containerID="0b16713e8946b700c358d9b1c6eb1db8567dd58bf9276af91cfe20a9bbd97011" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.308749 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b16713e8946b700c358d9b1c6eb1db8567dd58bf9276af91cfe20a9bbd97011"} err="failed to get container status \"0b16713e8946b700c358d9b1c6eb1db8567dd58bf9276af91cfe20a9bbd97011\": rpc error: code = NotFound desc = could not find container \"0b16713e8946b700c358d9b1c6eb1db8567dd58bf9276af91cfe20a9bbd97011\": container with ID starting with 0b16713e8946b700c358d9b1c6eb1db8567dd58bf9276af91cfe20a9bbd97011 not found: ID does not exist" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.308782 4871 scope.go:117] "RemoveContainer" containerID="b9e99db464bc9e3a1e0b34932c94c2e7dae3694150811fc884fc475944dcc933" Jan 28 15:23:49 crc 
kubenswrapper[4871]: E0128 15:23:49.309025 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9e99db464bc9e3a1e0b34932c94c2e7dae3694150811fc884fc475944dcc933\": container with ID starting with b9e99db464bc9e3a1e0b34932c94c2e7dae3694150811fc884fc475944dcc933 not found: ID does not exist" containerID="b9e99db464bc9e3a1e0b34932c94c2e7dae3694150811fc884fc475944dcc933" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.309053 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9e99db464bc9e3a1e0b34932c94c2e7dae3694150811fc884fc475944dcc933"} err="failed to get container status \"b9e99db464bc9e3a1e0b34932c94c2e7dae3694150811fc884fc475944dcc933\": rpc error: code = NotFound desc = could not find container \"b9e99db464bc9e3a1e0b34932c94c2e7dae3694150811fc884fc475944dcc933\": container with ID starting with b9e99db464bc9e3a1e0b34932c94c2e7dae3694150811fc884fc475944dcc933 not found: ID does not exist" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.309073 4871 scope.go:117] "RemoveContainer" containerID="eef1f938017d3cac965244bb04470f3fb5e2808053a7c55e66d9f563830bff8f" Jan 28 15:23:49 crc kubenswrapper[4871]: E0128 15:23:49.309468 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eef1f938017d3cac965244bb04470f3fb5e2808053a7c55e66d9f563830bff8f\": container with ID starting with eef1f938017d3cac965244bb04470f3fb5e2808053a7c55e66d9f563830bff8f not found: ID does not exist" containerID="eef1f938017d3cac965244bb04470f3fb5e2808053a7c55e66d9f563830bff8f" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.309513 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef1f938017d3cac965244bb04470f3fb5e2808053a7c55e66d9f563830bff8f"} err="failed to get container status 
\"eef1f938017d3cac965244bb04470f3fb5e2808053a7c55e66d9f563830bff8f\": rpc error: code = NotFound desc = could not find container \"eef1f938017d3cac965244bb04470f3fb5e2808053a7c55e66d9f563830bff8f\": container with ID starting with eef1f938017d3cac965244bb04470f3fb5e2808053a7c55e66d9f563830bff8f not found: ID does not exist" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.309541 4871 scope.go:117] "RemoveContainer" containerID="1a3764765ae722333d10f0e1d375a7e73470d0cea68ac126b26f4db0d7353731" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.321179 4871 scope.go:117] "RemoveContainer" containerID="a828db6bac502002c6b0a6309a397244276fd3a021809cd9d6faae9ab4eb1b49" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.333879 4871 scope.go:117] "RemoveContainer" containerID="fd5f75951cc8b49e032414221c980a6877a544a2552ed879ff69bdfd7941ef18" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.353113 4871 scope.go:117] "RemoveContainer" containerID="1a3764765ae722333d10f0e1d375a7e73470d0cea68ac126b26f4db0d7353731" Jan 28 15:23:49 crc kubenswrapper[4871]: E0128 15:23:49.353575 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a3764765ae722333d10f0e1d375a7e73470d0cea68ac126b26f4db0d7353731\": container with ID starting with 1a3764765ae722333d10f0e1d375a7e73470d0cea68ac126b26f4db0d7353731 not found: ID does not exist" containerID="1a3764765ae722333d10f0e1d375a7e73470d0cea68ac126b26f4db0d7353731" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.353657 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a3764765ae722333d10f0e1d375a7e73470d0cea68ac126b26f4db0d7353731"} err="failed to get container status \"1a3764765ae722333d10f0e1d375a7e73470d0cea68ac126b26f4db0d7353731\": rpc error: code = NotFound desc = could not find container \"1a3764765ae722333d10f0e1d375a7e73470d0cea68ac126b26f4db0d7353731\": container with ID starting 
with 1a3764765ae722333d10f0e1d375a7e73470d0cea68ac126b26f4db0d7353731 not found: ID does not exist" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.353696 4871 scope.go:117] "RemoveContainer" containerID="a828db6bac502002c6b0a6309a397244276fd3a021809cd9d6faae9ab4eb1b49" Jan 28 15:23:49 crc kubenswrapper[4871]: E0128 15:23:49.354001 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a828db6bac502002c6b0a6309a397244276fd3a021809cd9d6faae9ab4eb1b49\": container with ID starting with a828db6bac502002c6b0a6309a397244276fd3a021809cd9d6faae9ab4eb1b49 not found: ID does not exist" containerID="a828db6bac502002c6b0a6309a397244276fd3a021809cd9d6faae9ab4eb1b49" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.354029 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a828db6bac502002c6b0a6309a397244276fd3a021809cd9d6faae9ab4eb1b49"} err="failed to get container status \"a828db6bac502002c6b0a6309a397244276fd3a021809cd9d6faae9ab4eb1b49\": rpc error: code = NotFound desc = could not find container \"a828db6bac502002c6b0a6309a397244276fd3a021809cd9d6faae9ab4eb1b49\": container with ID starting with a828db6bac502002c6b0a6309a397244276fd3a021809cd9d6faae9ab4eb1b49 not found: ID does not exist" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.354052 4871 scope.go:117] "RemoveContainer" containerID="fd5f75951cc8b49e032414221c980a6877a544a2552ed879ff69bdfd7941ef18" Jan 28 15:23:49 crc kubenswrapper[4871]: E0128 15:23:49.354217 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd5f75951cc8b49e032414221c980a6877a544a2552ed879ff69bdfd7941ef18\": container with ID starting with fd5f75951cc8b49e032414221c980a6877a544a2552ed879ff69bdfd7941ef18 not found: ID does not exist" containerID="fd5f75951cc8b49e032414221c980a6877a544a2552ed879ff69bdfd7941ef18" Jan 28 15:23:49 
crc kubenswrapper[4871]: I0128 15:23:49.354243 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd5f75951cc8b49e032414221c980a6877a544a2552ed879ff69bdfd7941ef18"} err="failed to get container status \"fd5f75951cc8b49e032414221c980a6877a544a2552ed879ff69bdfd7941ef18\": rpc error: code = NotFound desc = could not find container \"fd5f75951cc8b49e032414221c980a6877a544a2552ed879ff69bdfd7941ef18\": container with ID starting with fd5f75951cc8b49e032414221c980a6877a544a2552ed879ff69bdfd7941ef18 not found: ID does not exist" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.354261 4871 scope.go:117] "RemoveContainer" containerID="0c3e1b0feb63457dbffe3326079e906f6b122a98866b8dfdcb312fac2b70e47b" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.369811 4871 scope.go:117] "RemoveContainer" containerID="0c3e1b0feb63457dbffe3326079e906f6b122a98866b8dfdcb312fac2b70e47b" Jan 28 15:23:49 crc kubenswrapper[4871]: E0128 15:23:49.370261 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c3e1b0feb63457dbffe3326079e906f6b122a98866b8dfdcb312fac2b70e47b\": container with ID starting with 0c3e1b0feb63457dbffe3326079e906f6b122a98866b8dfdcb312fac2b70e47b not found: ID does not exist" containerID="0c3e1b0feb63457dbffe3326079e906f6b122a98866b8dfdcb312fac2b70e47b" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.370295 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c3e1b0feb63457dbffe3326079e906f6b122a98866b8dfdcb312fac2b70e47b"} err="failed to get container status \"0c3e1b0feb63457dbffe3326079e906f6b122a98866b8dfdcb312fac2b70e47b\": rpc error: code = NotFound desc = could not find container \"0c3e1b0feb63457dbffe3326079e906f6b122a98866b8dfdcb312fac2b70e47b\": container with ID starting with 0c3e1b0feb63457dbffe3326079e906f6b122a98866b8dfdcb312fac2b70e47b not found: ID does not exist" Jan 28 
15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.469619 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-mnmsx" Jan 28 15:23:49 crc kubenswrapper[4871]: I0128 15:23:49.525570 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vprhz"] Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.209128 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4zcxk"] Jan 28 15:23:50 crc kubenswrapper[4871]: E0128 15:23:50.209758 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6575ba24-dfe2-4f55-96ee-6692928debdd" containerName="registry-server" Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.209773 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="6575ba24-dfe2-4f55-96ee-6692928debdd" containerName="registry-server" Jan 28 15:23:50 crc kubenswrapper[4871]: E0128 15:23:50.209784 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9524a883-d083-471d-8c30-866172b8456e" containerName="extract-content" Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.209792 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="9524a883-d083-471d-8c30-866172b8456e" containerName="extract-content" Jan 28 15:23:50 crc kubenswrapper[4871]: E0128 15:23:50.209802 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9524a883-d083-471d-8c30-866172b8456e" containerName="registry-server" Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.209808 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="9524a883-d083-471d-8c30-866172b8456e" containerName="registry-server" Jan 28 15:23:50 crc kubenswrapper[4871]: E0128 15:23:50.209820 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b83a5ec-9ed6-4e66-9a39-610a39f64d19" containerName="extract-utilities" Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 
15:23:50.209829 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b83a5ec-9ed6-4e66-9a39-610a39f64d19" containerName="extract-utilities" Jan 28 15:23:50 crc kubenswrapper[4871]: E0128 15:23:50.209840 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a7be4a-2d1b-4b46-a720-4068e3fad906" containerName="marketplace-operator" Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.209845 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a7be4a-2d1b-4b46-a720-4068e3fad906" containerName="marketplace-operator" Jan 28 15:23:50 crc kubenswrapper[4871]: E0128 15:23:50.209853 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9524a883-d083-471d-8c30-866172b8456e" containerName="extract-utilities" Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.209860 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="9524a883-d083-471d-8c30-866172b8456e" containerName="extract-utilities" Jan 28 15:23:50 crc kubenswrapper[4871]: E0128 15:23:50.209872 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e209b4f2-aaea-496b-8e14-58f2aa8faaa5" containerName="extract-content" Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.209880 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="e209b4f2-aaea-496b-8e14-58f2aa8faaa5" containerName="extract-content" Jan 28 15:23:50 crc kubenswrapper[4871]: E0128 15:23:50.209889 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b83a5ec-9ed6-4e66-9a39-610a39f64d19" containerName="registry-server" Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.209895 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b83a5ec-9ed6-4e66-9a39-610a39f64d19" containerName="registry-server" Jan 28 15:23:50 crc kubenswrapper[4871]: E0128 15:23:50.209902 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6575ba24-dfe2-4f55-96ee-6692928debdd" containerName="extract-content" Jan 28 15:23:50 crc kubenswrapper[4871]: 
I0128 15:23:50.209910 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="6575ba24-dfe2-4f55-96ee-6692928debdd" containerName="extract-content" Jan 28 15:23:50 crc kubenswrapper[4871]: E0128 15:23:50.209918 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e209b4f2-aaea-496b-8e14-58f2aa8faaa5" containerName="registry-server" Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.209924 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="e209b4f2-aaea-496b-8e14-58f2aa8faaa5" containerName="registry-server" Jan 28 15:23:50 crc kubenswrapper[4871]: E0128 15:23:50.209935 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6575ba24-dfe2-4f55-96ee-6692928debdd" containerName="extract-utilities" Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.209942 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="6575ba24-dfe2-4f55-96ee-6692928debdd" containerName="extract-utilities" Jan 28 15:23:50 crc kubenswrapper[4871]: E0128 15:23:50.209951 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e209b4f2-aaea-496b-8e14-58f2aa8faaa5" containerName="extract-utilities" Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.209959 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="e209b4f2-aaea-496b-8e14-58f2aa8faaa5" containerName="extract-utilities" Jan 28 15:23:50 crc kubenswrapper[4871]: E0128 15:23:50.209969 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b83a5ec-9ed6-4e66-9a39-610a39f64d19" containerName="extract-content" Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.209977 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b83a5ec-9ed6-4e66-9a39-610a39f64d19" containerName="extract-content" Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.210080 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="9524a883-d083-471d-8c30-866172b8456e" containerName="registry-server" Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 
15:23:50.210091 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="6575ba24-dfe2-4f55-96ee-6692928debdd" containerName="registry-server"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.210103 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a7be4a-2d1b-4b46-a720-4068e3fad906" containerName="marketplace-operator"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.210110 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="e209b4f2-aaea-496b-8e14-58f2aa8faaa5" containerName="registry-server"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.210122 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b83a5ec-9ed6-4e66-9a39-610a39f64d19" containerName="registry-server"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.210952 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zcxk"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.211104 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zcxk"]
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.214299 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.316169 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0205aa8-3e10-465a-8f2a-26a165b8b32e-catalog-content\") pod \"redhat-marketplace-4zcxk\" (UID: \"e0205aa8-3e10-465a-8f2a-26a165b8b32e\") " pod="openshift-marketplace/redhat-marketplace-4zcxk"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.316235 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf8cn\" (UniqueName: \"kubernetes.io/projected/e0205aa8-3e10-465a-8f2a-26a165b8b32e-kube-api-access-hf8cn\") pod \"redhat-marketplace-4zcxk\" (UID: \"e0205aa8-3e10-465a-8f2a-26a165b8b32e\") " pod="openshift-marketplace/redhat-marketplace-4zcxk"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.316394 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0205aa8-3e10-465a-8f2a-26a165b8b32e-utilities\") pod \"redhat-marketplace-4zcxk\" (UID: \"e0205aa8-3e10-465a-8f2a-26a165b8b32e\") " pod="openshift-marketplace/redhat-marketplace-4zcxk"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.407669 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j8zlq"]
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.408805 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j8zlq"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.409668 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j8zlq"]
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.410823 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.424087 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0205aa8-3e10-465a-8f2a-26a165b8b32e-utilities\") pod \"redhat-marketplace-4zcxk\" (UID: \"e0205aa8-3e10-465a-8f2a-26a165b8b32e\") " pod="openshift-marketplace/redhat-marketplace-4zcxk"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.424161 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0205aa8-3e10-465a-8f2a-26a165b8b32e-catalog-content\") pod \"redhat-marketplace-4zcxk\" (UID: \"e0205aa8-3e10-465a-8f2a-26a165b8b32e\") " pod="openshift-marketplace/redhat-marketplace-4zcxk"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.424200 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf8cn\" (UniqueName: \"kubernetes.io/projected/e0205aa8-3e10-465a-8f2a-26a165b8b32e-kube-api-access-hf8cn\") pod \"redhat-marketplace-4zcxk\" (UID: \"e0205aa8-3e10-465a-8f2a-26a165b8b32e\") " pod="openshift-marketplace/redhat-marketplace-4zcxk"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.424659 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0205aa8-3e10-465a-8f2a-26a165b8b32e-utilities\") pod \"redhat-marketplace-4zcxk\" (UID: \"e0205aa8-3e10-465a-8f2a-26a165b8b32e\") " pod="openshift-marketplace/redhat-marketplace-4zcxk"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.424779 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0205aa8-3e10-465a-8f2a-26a165b8b32e-catalog-content\") pod \"redhat-marketplace-4zcxk\" (UID: \"e0205aa8-3e10-465a-8f2a-26a165b8b32e\") " pod="openshift-marketplace/redhat-marketplace-4zcxk"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.448409 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf8cn\" (UniqueName: \"kubernetes.io/projected/e0205aa8-3e10-465a-8f2a-26a165b8b32e-kube-api-access-hf8cn\") pod \"redhat-marketplace-4zcxk\" (UID: \"e0205aa8-3e10-465a-8f2a-26a165b8b32e\") " pod="openshift-marketplace/redhat-marketplace-4zcxk"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.525323 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bc84e74-a5d1-491f-9fec-7400c66214bc-catalog-content\") pod \"redhat-operators-j8zlq\" (UID: \"3bc84e74-a5d1-491f-9fec-7400c66214bc\") " pod="openshift-marketplace/redhat-operators-j8zlq"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.525398 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc84e74-a5d1-491f-9fec-7400c66214bc-utilities\") pod \"redhat-operators-j8zlq\" (UID: \"3bc84e74-a5d1-491f-9fec-7400c66214bc\") " pod="openshift-marketplace/redhat-operators-j8zlq"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.525420 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlhkg\" (UniqueName: \"kubernetes.io/projected/3bc84e74-a5d1-491f-9fec-7400c66214bc-kube-api-access-rlhkg\") pod \"redhat-operators-j8zlq\" (UID: \"3bc84e74-a5d1-491f-9fec-7400c66214bc\") " pod="openshift-marketplace/redhat-operators-j8zlq"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.540058 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zcxk"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.626441 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlhkg\" (UniqueName: \"kubernetes.io/projected/3bc84e74-a5d1-491f-9fec-7400c66214bc-kube-api-access-rlhkg\") pod \"redhat-operators-j8zlq\" (UID: \"3bc84e74-a5d1-491f-9fec-7400c66214bc\") " pod="openshift-marketplace/redhat-operators-j8zlq"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.626818 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bc84e74-a5d1-491f-9fec-7400c66214bc-catalog-content\") pod \"redhat-operators-j8zlq\" (UID: \"3bc84e74-a5d1-491f-9fec-7400c66214bc\") " pod="openshift-marketplace/redhat-operators-j8zlq"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.626892 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc84e74-a5d1-491f-9fec-7400c66214bc-utilities\") pod \"redhat-operators-j8zlq\" (UID: \"3bc84e74-a5d1-491f-9fec-7400c66214bc\") " pod="openshift-marketplace/redhat-operators-j8zlq"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.627500 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bc84e74-a5d1-491f-9fec-7400c66214bc-catalog-content\") pod \"redhat-operators-j8zlq\" (UID: \"3bc84e74-a5d1-491f-9fec-7400c66214bc\") " pod="openshift-marketplace/redhat-operators-j8zlq"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.627531 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc84e74-a5d1-491f-9fec-7400c66214bc-utilities\") pod \"redhat-operators-j8zlq\" (UID: \"3bc84e74-a5d1-491f-9fec-7400c66214bc\") " pod="openshift-marketplace/redhat-operators-j8zlq"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.663345 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlhkg\" (UniqueName: \"kubernetes.io/projected/3bc84e74-a5d1-491f-9fec-7400c66214bc-kube-api-access-rlhkg\") pod \"redhat-operators-j8zlq\" (UID: \"3bc84e74-a5d1-491f-9fec-7400c66214bc\") " pod="openshift-marketplace/redhat-operators-j8zlq"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.738124 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j8zlq"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.910554 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48a7be4a-2d1b-4b46-a720-4068e3fad906" path="/var/lib/kubelet/pods/48a7be4a-2d1b-4b46-a720-4068e3fad906/volumes"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.911343 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6575ba24-dfe2-4f55-96ee-6692928debdd" path="/var/lib/kubelet/pods/6575ba24-dfe2-4f55-96ee-6692928debdd/volumes"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.912052 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9524a883-d083-471d-8c30-866172b8456e" path="/var/lib/kubelet/pods/9524a883-d083-471d-8c30-866172b8456e/volumes"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.913247 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b83a5ec-9ed6-4e66-9a39-610a39f64d19" path="/var/lib/kubelet/pods/9b83a5ec-9ed6-4e66-9a39-610a39f64d19/volumes"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.914022 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e209b4f2-aaea-496b-8e14-58f2aa8faaa5" path="/var/lib/kubelet/pods/e209b4f2-aaea-496b-8e14-58f2aa8faaa5/volumes"
Jan 28 15:23:50 crc kubenswrapper[4871]: I0128 15:23:50.941010 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zcxk"]
Jan 28 15:23:50 crc kubenswrapper[4871]: W0128 15:23:50.945209 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0205aa8_3e10_465a_8f2a_26a165b8b32e.slice/crio-d4a0018bec27acd34682ca89c1fda6e3ae96cf9eb177b5c621fd9fef1d8fe265 WatchSource:0}: Error finding container d4a0018bec27acd34682ca89c1fda6e3ae96cf9eb177b5c621fd9fef1d8fe265: Status 404 returned error can't find the container with id d4a0018bec27acd34682ca89c1fda6e3ae96cf9eb177b5c621fd9fef1d8fe265
Jan 28 15:23:51 crc kubenswrapper[4871]: I0128 15:23:51.117038 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j8zlq"]
Jan 28 15:23:51 crc kubenswrapper[4871]: I0128 15:23:51.160546 4871 generic.go:334] "Generic (PLEG): container finished" podID="e0205aa8-3e10-465a-8f2a-26a165b8b32e" containerID="0e9d957ebf26006c80df4010e20eb213968c11b9d5fc18663b11f00bda8d3b3c" exitCode=0
Jan 28 15:23:51 crc kubenswrapper[4871]: I0128 15:23:51.160695 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zcxk" event={"ID":"e0205aa8-3e10-465a-8f2a-26a165b8b32e","Type":"ContainerDied","Data":"0e9d957ebf26006c80df4010e20eb213968c11b9d5fc18663b11f00bda8d3b3c"}
Jan 28 15:23:51 crc kubenswrapper[4871]: I0128 15:23:51.160756 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zcxk" event={"ID":"e0205aa8-3e10-465a-8f2a-26a165b8b32e","Type":"ContainerStarted","Data":"d4a0018bec27acd34682ca89c1fda6e3ae96cf9eb177b5c621fd9fef1d8fe265"}
Jan 28 15:23:51 crc kubenswrapper[4871]: W0128 15:23:51.178274 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bc84e74_a5d1_491f_9fec_7400c66214bc.slice/crio-ba86054939933b6b7991c658da006b3bbe4026508609f3dc7458c6901f66cb57 WatchSource:0}: Error finding container ba86054939933b6b7991c658da006b3bbe4026508609f3dc7458c6901f66cb57: Status 404 returned error can't find the container with id ba86054939933b6b7991c658da006b3bbe4026508609f3dc7458c6901f66cb57
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.166731 4871 generic.go:334] "Generic (PLEG): container finished" podID="3bc84e74-a5d1-491f-9fec-7400c66214bc" containerID="4820c1e908d3cc9c4b837de364740995013966647c5604c25e98a0923f963014" exitCode=0
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.166787 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8zlq" event={"ID":"3bc84e74-a5d1-491f-9fec-7400c66214bc","Type":"ContainerDied","Data":"4820c1e908d3cc9c4b837de364740995013966647c5604c25e98a0923f963014"}
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.167036 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8zlq" event={"ID":"3bc84e74-a5d1-491f-9fec-7400c66214bc","Type":"ContainerStarted","Data":"ba86054939933b6b7991c658da006b3bbe4026508609f3dc7458c6901f66cb57"}
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.605428 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-twk99"]
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.607301 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-twk99"
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.611011 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.613108 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-twk99"]
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.753740 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2ztz\" (UniqueName: \"kubernetes.io/projected/7db86131-17b1-4d13-9a6b-469419099f0e-kube-api-access-w2ztz\") pod \"certified-operators-twk99\" (UID: \"7db86131-17b1-4d13-9a6b-469419099f0e\") " pod="openshift-marketplace/certified-operators-twk99"
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.753789 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7db86131-17b1-4d13-9a6b-469419099f0e-utilities\") pod \"certified-operators-twk99\" (UID: \"7db86131-17b1-4d13-9a6b-469419099f0e\") " pod="openshift-marketplace/certified-operators-twk99"
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.754052 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7db86131-17b1-4d13-9a6b-469419099f0e-catalog-content\") pod \"certified-operators-twk99\" (UID: \"7db86131-17b1-4d13-9a6b-469419099f0e\") " pod="openshift-marketplace/certified-operators-twk99"
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.805490 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lhxxb"]
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.806544 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lhxxb"
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.810769 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.822199 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lhxxb"]
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.855311 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7db86131-17b1-4d13-9a6b-469419099f0e-catalog-content\") pod \"certified-operators-twk99\" (UID: \"7db86131-17b1-4d13-9a6b-469419099f0e\") " pod="openshift-marketplace/certified-operators-twk99"
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.855365 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2ztz\" (UniqueName: \"kubernetes.io/projected/7db86131-17b1-4d13-9a6b-469419099f0e-kube-api-access-w2ztz\") pod \"certified-operators-twk99\" (UID: \"7db86131-17b1-4d13-9a6b-469419099f0e\") " pod="openshift-marketplace/certified-operators-twk99"
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.855389 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7db86131-17b1-4d13-9a6b-469419099f0e-utilities\") pod \"certified-operators-twk99\" (UID: \"7db86131-17b1-4d13-9a6b-469419099f0e\") " pod="openshift-marketplace/certified-operators-twk99"
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.856036 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7db86131-17b1-4d13-9a6b-469419099f0e-catalog-content\") pod \"certified-operators-twk99\" (UID: \"7db86131-17b1-4d13-9a6b-469419099f0e\") " pod="openshift-marketplace/certified-operators-twk99"
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.856036 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7db86131-17b1-4d13-9a6b-469419099f0e-utilities\") pod \"certified-operators-twk99\" (UID: \"7db86131-17b1-4d13-9a6b-469419099f0e\") " pod="openshift-marketplace/certified-operators-twk99"
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.895652 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2ztz\" (UniqueName: \"kubernetes.io/projected/7db86131-17b1-4d13-9a6b-469419099f0e-kube-api-access-w2ztz\") pod \"certified-operators-twk99\" (UID: \"7db86131-17b1-4d13-9a6b-469419099f0e\") " pod="openshift-marketplace/certified-operators-twk99"
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.924279 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-twk99"
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.957869 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrrtk\" (UniqueName: \"kubernetes.io/projected/6ff5567c-418f-4f43-9839-373c20d07017-kube-api-access-vrrtk\") pod \"community-operators-lhxxb\" (UID: \"6ff5567c-418f-4f43-9839-373c20d07017\") " pod="openshift-marketplace/community-operators-lhxxb"
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.957980 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff5567c-418f-4f43-9839-373c20d07017-utilities\") pod \"community-operators-lhxxb\" (UID: \"6ff5567c-418f-4f43-9839-373c20d07017\") " pod="openshift-marketplace/community-operators-lhxxb"
Jan 28 15:23:52 crc kubenswrapper[4871]: I0128 15:23:52.957998 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff5567c-418f-4f43-9839-373c20d07017-catalog-content\") pod \"community-operators-lhxxb\" (UID: \"6ff5567c-418f-4f43-9839-373c20d07017\") " pod="openshift-marketplace/community-operators-lhxxb"
Jan 28 15:23:53 crc kubenswrapper[4871]: I0128 15:23:53.059343 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff5567c-418f-4f43-9839-373c20d07017-utilities\") pod \"community-operators-lhxxb\" (UID: \"6ff5567c-418f-4f43-9839-373c20d07017\") " pod="openshift-marketplace/community-operators-lhxxb"
Jan 28 15:23:53 crc kubenswrapper[4871]: I0128 15:23:53.059663 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff5567c-418f-4f43-9839-373c20d07017-catalog-content\") pod \"community-operators-lhxxb\" (UID: \"6ff5567c-418f-4f43-9839-373c20d07017\") " pod="openshift-marketplace/community-operators-lhxxb"
Jan 28 15:23:53 crc kubenswrapper[4871]: I0128 15:23:53.059775 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrrtk\" (UniqueName: \"kubernetes.io/projected/6ff5567c-418f-4f43-9839-373c20d07017-kube-api-access-vrrtk\") pod \"community-operators-lhxxb\" (UID: \"6ff5567c-418f-4f43-9839-373c20d07017\") " pod="openshift-marketplace/community-operators-lhxxb"
Jan 28 15:23:53 crc kubenswrapper[4871]: I0128 15:23:53.060366 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff5567c-418f-4f43-9839-373c20d07017-utilities\") pod \"community-operators-lhxxb\" (UID: \"6ff5567c-418f-4f43-9839-373c20d07017\") " pod="openshift-marketplace/community-operators-lhxxb"
Jan 28 15:23:53 crc kubenswrapper[4871]: I0128 15:23:53.060381 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff5567c-418f-4f43-9839-373c20d07017-catalog-content\") pod \"community-operators-lhxxb\" (UID: \"6ff5567c-418f-4f43-9839-373c20d07017\") " pod="openshift-marketplace/community-operators-lhxxb"
Jan 28 15:23:53 crc kubenswrapper[4871]: I0128 15:23:53.083485 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrrtk\" (UniqueName: \"kubernetes.io/projected/6ff5567c-418f-4f43-9839-373c20d07017-kube-api-access-vrrtk\") pod \"community-operators-lhxxb\" (UID: \"6ff5567c-418f-4f43-9839-373c20d07017\") " pod="openshift-marketplace/community-operators-lhxxb"
Jan 28 15:23:53 crc kubenswrapper[4871]: I0128 15:23:53.120075 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lhxxb"
Jan 28 15:23:53 crc kubenswrapper[4871]: I0128 15:23:53.175978 4871 generic.go:334] "Generic (PLEG): container finished" podID="e0205aa8-3e10-465a-8f2a-26a165b8b32e" containerID="da6e6154280e57d9d8a15f3b683849312c14da763376862ebba6fce56d3a0571" exitCode=0
Jan 28 15:23:53 crc kubenswrapper[4871]: I0128 15:23:53.176077 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zcxk" event={"ID":"e0205aa8-3e10-465a-8f2a-26a165b8b32e","Type":"ContainerDied","Data":"da6e6154280e57d9d8a15f3b683849312c14da763376862ebba6fce56d3a0571"}
Jan 28 15:23:53 crc kubenswrapper[4871]: I0128 15:23:53.183100 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8zlq" event={"ID":"3bc84e74-a5d1-491f-9fec-7400c66214bc","Type":"ContainerStarted","Data":"2afe34c0bbe534ba52d1760f3e56308ed74df40497e70f8194b71098e9f246ac"}
Jan 28 15:23:53 crc kubenswrapper[4871]: I0128 15:23:53.368302 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-twk99"]
Jan 28 15:23:53 crc kubenswrapper[4871]: W0128 15:23:53.375962 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7db86131_17b1_4d13_9a6b_469419099f0e.slice/crio-11a4da9cc42fa90e4f8943762628491b95629f6ccc3e942664cb92c7b42746df WatchSource:0}: Error finding container 11a4da9cc42fa90e4f8943762628491b95629f6ccc3e942664cb92c7b42746df: Status 404 returned error can't find the container with id 11a4da9cc42fa90e4f8943762628491b95629f6ccc3e942664cb92c7b42746df
Jan 28 15:23:53 crc kubenswrapper[4871]: I0128 15:23:53.585436 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lhxxb"]
Jan 28 15:23:53 crc kubenswrapper[4871]: W0128 15:23:53.596240 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ff5567c_418f_4f43_9839_373c20d07017.slice/crio-5ea419e96f6a9a59ab948dd144d176da8f09c77f4a02d1aed67c10903be5742a WatchSource:0}: Error finding container 5ea419e96f6a9a59ab948dd144d176da8f09c77f4a02d1aed67c10903be5742a: Status 404 returned error can't find the container with id 5ea419e96f6a9a59ab948dd144d176da8f09c77f4a02d1aed67c10903be5742a
Jan 28 15:23:54 crc kubenswrapper[4871]: I0128 15:23:54.190221 4871 generic.go:334] "Generic (PLEG): container finished" podID="3bc84e74-a5d1-491f-9fec-7400c66214bc" containerID="2afe34c0bbe534ba52d1760f3e56308ed74df40497e70f8194b71098e9f246ac" exitCode=0
Jan 28 15:23:54 crc kubenswrapper[4871]: I0128 15:23:54.190316 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8zlq" event={"ID":"3bc84e74-a5d1-491f-9fec-7400c66214bc","Type":"ContainerDied","Data":"2afe34c0bbe534ba52d1760f3e56308ed74df40497e70f8194b71098e9f246ac"}
Jan 28 15:23:54 crc kubenswrapper[4871]: I0128 15:23:54.196419 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zcxk" event={"ID":"e0205aa8-3e10-465a-8f2a-26a165b8b32e","Type":"ContainerStarted","Data":"0b97906a76b1eee20b1b7193e2e3a147c8a9da9d9b120a7a0474a4fb0385f9e8"}
Jan 28 15:23:54 crc kubenswrapper[4871]: I0128 15:23:54.197965 4871 generic.go:334] "Generic (PLEG): container finished" podID="6ff5567c-418f-4f43-9839-373c20d07017" containerID="4dd4528db2ca108ba48faba818c1f8f70da1353944e31abdadf83d35356f03de" exitCode=0
Jan 28 15:23:54 crc kubenswrapper[4871]: I0128 15:23:54.198000 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhxxb" event={"ID":"6ff5567c-418f-4f43-9839-373c20d07017","Type":"ContainerDied","Data":"4dd4528db2ca108ba48faba818c1f8f70da1353944e31abdadf83d35356f03de"}
Jan 28 15:23:54 crc kubenswrapper[4871]: I0128 15:23:54.198014 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhxxb" event={"ID":"6ff5567c-418f-4f43-9839-373c20d07017","Type":"ContainerStarted","Data":"5ea419e96f6a9a59ab948dd144d176da8f09c77f4a02d1aed67c10903be5742a"}
Jan 28 15:23:54 crc kubenswrapper[4871]: I0128 15:23:54.200339 4871 generic.go:334] "Generic (PLEG): container finished" podID="7db86131-17b1-4d13-9a6b-469419099f0e" containerID="5476268d3499ff67f4eb3bdc23751320aaf65bc7dec217618ad602f71608ae68" exitCode=0
Jan 28 15:23:54 crc kubenswrapper[4871]: I0128 15:23:54.200357 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twk99" event={"ID":"7db86131-17b1-4d13-9a6b-469419099f0e","Type":"ContainerDied","Data":"5476268d3499ff67f4eb3bdc23751320aaf65bc7dec217618ad602f71608ae68"}
Jan 28 15:23:54 crc kubenswrapper[4871]: I0128 15:23:54.200369 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twk99" event={"ID":"7db86131-17b1-4d13-9a6b-469419099f0e","Type":"ContainerStarted","Data":"11a4da9cc42fa90e4f8943762628491b95629f6ccc3e942664cb92c7b42746df"}
Jan 28 15:23:54 crc kubenswrapper[4871]: I0128 15:23:54.264116 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4zcxk" podStartSLOduration=1.726762201 podStartE2EDuration="4.264101617s" podCreationTimestamp="2026-01-28 15:23:50 +0000 UTC" firstStartedPulling="2026-01-28 15:23:51.162606444 +0000 UTC m=+383.058444766" lastFinishedPulling="2026-01-28 15:23:53.69994584 +0000 UTC m=+385.595784182" observedRunningTime="2026-01-28 15:23:54.259629857 +0000 UTC m=+386.155468179" watchObservedRunningTime="2026-01-28 15:23:54.264101617 +0000 UTC m=+386.159939939"
Jan 28 15:23:55 crc kubenswrapper[4871]: I0128 15:23:55.208853 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhxxb" event={"ID":"6ff5567c-418f-4f43-9839-373c20d07017","Type":"ContainerStarted","Data":"10fbcfad89279690cc6492cef0d7622431166e93a953658c2a35a41e48e2d072"}
Jan 28 15:23:55 crc kubenswrapper[4871]: I0128 15:23:55.211030 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twk99" event={"ID":"7db86131-17b1-4d13-9a6b-469419099f0e","Type":"ContainerStarted","Data":"98d8079cc94fe102f70a6012a9045bdbbb4679d0f0428852c6271a81d5917d8e"}
Jan 28 15:23:56 crc kubenswrapper[4871]: I0128 15:23:56.218121 4871 generic.go:334] "Generic (PLEG): container finished" podID="6ff5567c-418f-4f43-9839-373c20d07017" containerID="10fbcfad89279690cc6492cef0d7622431166e93a953658c2a35a41e48e2d072" exitCode=0
Jan 28 15:23:56 crc kubenswrapper[4871]: I0128 15:23:56.218209 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhxxb" event={"ID":"6ff5567c-418f-4f43-9839-373c20d07017","Type":"ContainerDied","Data":"10fbcfad89279690cc6492cef0d7622431166e93a953658c2a35a41e48e2d072"}
Jan 28 15:23:56 crc kubenswrapper[4871]: I0128 15:23:56.221610 4871 generic.go:334] "Generic (PLEG): container finished" podID="7db86131-17b1-4d13-9a6b-469419099f0e" containerID="98d8079cc94fe102f70a6012a9045bdbbb4679d0f0428852c6271a81d5917d8e" exitCode=0
Jan 28 15:23:56 crc kubenswrapper[4871]: I0128 15:23:56.221665 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twk99" event={"ID":"7db86131-17b1-4d13-9a6b-469419099f0e","Type":"ContainerDied","Data":"98d8079cc94fe102f70a6012a9045bdbbb4679d0f0428852c6271a81d5917d8e"}
Jan 28 15:23:56 crc kubenswrapper[4871]: I0128 15:23:56.236251 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8zlq" event={"ID":"3bc84e74-a5d1-491f-9fec-7400c66214bc","Type":"ContainerStarted","Data":"e29844274fa88938aa6c728153144accef87b930b52ed8835b9d3b154ebf1f3e"}
Jan 28 15:23:56 crc kubenswrapper[4871]: I0128 15:23:56.277096 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j8zlq" podStartSLOduration=2.697218066 podStartE2EDuration="6.277077361s" podCreationTimestamp="2026-01-28 15:23:50 +0000 UTC" firstStartedPulling="2026-01-28 15:23:52.168093524 +0000 UTC m=+384.063931846" lastFinishedPulling="2026-01-28 15:23:55.747952809 +0000 UTC m=+387.643791141" observedRunningTime="2026-01-28 15:23:56.271614779 +0000 UTC m=+388.167453101" watchObservedRunningTime="2026-01-28 15:23:56.277077361 +0000 UTC m=+388.172915683"
Jan 28 15:23:57 crc kubenswrapper[4871]: I0128 15:23:57.242975 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhxxb" event={"ID":"6ff5567c-418f-4f43-9839-373c20d07017","Type":"ContainerStarted","Data":"e27015b2d1a1e46de17dad98b2ba5d88d9a4cf811a128ef03ee5898dfb477170"}
Jan 28 15:23:57 crc kubenswrapper[4871]: I0128 15:23:57.245973 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twk99" event={"ID":"7db86131-17b1-4d13-9a6b-469419099f0e","Type":"ContainerStarted","Data":"0cacc887f187af71b832ec42f9dc0bfd43dfbf646dacc976b13f57927b58d28e"}
Jan 28 15:23:57 crc kubenswrapper[4871]: I0128 15:23:57.263861 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lhxxb" podStartSLOduration=2.752103343 podStartE2EDuration="5.263834621s" podCreationTimestamp="2026-01-28 15:23:52 +0000 UTC" firstStartedPulling="2026-01-28 15:23:54.199111397 +0000 UTC m=+386.094949719" lastFinishedPulling="2026-01-28 15:23:56.710842675 +0000 UTC m=+388.606680997" observedRunningTime="2026-01-28 15:23:57.259745141 +0000 UTC m=+389.155583473" watchObservedRunningTime="2026-01-28 15:23:57.263834621 +0000 UTC m=+389.159672943"
Jan 28 15:23:57 crc kubenswrapper[4871]: I0128 15:23:57.288101 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-twk99" podStartSLOduration=2.579941201 podStartE2EDuration="5.288083615s" podCreationTimestamp="2026-01-28 15:23:52 +0000 UTC" firstStartedPulling="2026-01-28 15:23:54.201135961 +0000 UTC m=+386.096974283" lastFinishedPulling="2026-01-28 15:23:56.909278375 +0000 UTC m=+388.805116697" observedRunningTime="2026-01-28 15:23:57.286134074 +0000 UTC m=+389.181972396" watchObservedRunningTime="2026-01-28 15:23:57.288083615 +0000 UTC m=+389.183921937"
Jan 28 15:24:00 crc kubenswrapper[4871]: I0128 15:24:00.541441 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4zcxk"
Jan 28 15:24:00 crc kubenswrapper[4871]: I0128 15:24:00.542137 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4zcxk"
Jan 28 15:24:00 crc kubenswrapper[4871]: I0128 15:24:00.594973 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4zcxk"
Jan 28 15:24:00 crc kubenswrapper[4871]: I0128 15:24:00.738333 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j8zlq"
Jan 28 15:24:00 crc kubenswrapper[4871]: I0128 15:24:00.738389 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j8zlq"
Jan 28 15:24:01 crc kubenswrapper[4871]: I0128 15:24:01.327029 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4zcxk"
Jan 28 15:24:01 crc kubenswrapper[4871]: I0128 15:24:01.779877 4871 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j8zlq" podUID="3bc84e74-a5d1-491f-9fec-7400c66214bc" containerName="registry-server" probeResult="failure" output=<
Jan 28 15:24:01 crc kubenswrapper[4871]: timeout: failed to connect service ":50051" within 1s
Jan 28 15:24:01 crc kubenswrapper[4871]: >
Jan 28 15:24:02 crc kubenswrapper[4871]: I0128 15:24:02.925079 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-twk99"
Jan 28 15:24:02 crc kubenswrapper[4871]: I0128 15:24:02.925485 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-twk99"
Jan 28 15:24:02 crc kubenswrapper[4871]: I0128 15:24:02.964000 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-twk99"
Jan 28 15:24:03 crc kubenswrapper[4871]: I0128 15:24:03.120511 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lhxxb"
Jan 28 15:24:03 crc kubenswrapper[4871]: I0128 15:24:03.120578 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lhxxb"
Jan 28 15:24:03 crc kubenswrapper[4871]: I0128 15:24:03.161481 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lhxxb"
Jan 28 15:24:03 crc kubenswrapper[4871]: I0128 15:24:03.327169 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lhxxb"
Jan 28 15:24:03 crc kubenswrapper[4871]: I0128 15:24:03.336454 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-twk99"
Jan 28 15:24:10 crc kubenswrapper[4871]: I0128 15:24:10.779543 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j8zlq"
Jan 28 15:24:10 crc kubenswrapper[4871]: I0128 15:24:10.824483 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j8zlq"
Jan 28 15:24:13 crc kubenswrapper[4871]: I0128 15:24:13.813977 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:24:13 crc kubenswrapper[4871]: I0128 15:24:13.814055 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:24:14 crc kubenswrapper[4871]: I0128 15:24:14.569291 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" podUID="ee25a7d4-5043-48d7-91d1-68f2af96109a" containerName="registry" containerID="cri-o://302a76e553c56910bd5e7c282d2a3bb25956e6b787536255e09cdd28263a5967" gracePeriod=30
Jan 28 15:24:14 crc kubenswrapper[4871]: I0128 15:24:14.944165 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vprhz"
Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.052906 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w44gq\" (UniqueName: \"kubernetes.io/projected/ee25a7d4-5043-48d7-91d1-68f2af96109a-kube-api-access-w44gq\") pod \"ee25a7d4-5043-48d7-91d1-68f2af96109a\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") "
Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.053133 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ee25a7d4-5043-48d7-91d1-68f2af96109a\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") "
Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.053167 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee25a7d4-5043-48d7-91d1-68f2af96109a-trusted-ca\") pod \"ee25a7d4-5043-48d7-91d1-68f2af96109a\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") "
Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.053475 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ee25a7d4-5043-48d7-91d1-68f2af96109a-registry-tls\") pod \"ee25a7d4-5043-48d7-91d1-68f2af96109a\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") "
Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.053525 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ee25a7d4-5043-48d7-91d1-68f2af96109a-registry-certificates\") pod \"ee25a7d4-5043-48d7-91d1-68f2af96109a\" (UID:
\"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.053616 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee25a7d4-5043-48d7-91d1-68f2af96109a-bound-sa-token\") pod \"ee25a7d4-5043-48d7-91d1-68f2af96109a\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.053893 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee25a7d4-5043-48d7-91d1-68f2af96109a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ee25a7d4-5043-48d7-91d1-68f2af96109a" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.054132 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee25a7d4-5043-48d7-91d1-68f2af96109a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ee25a7d4-5043-48d7-91d1-68f2af96109a" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.053870 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ee25a7d4-5043-48d7-91d1-68f2af96109a-installation-pull-secrets\") pod \"ee25a7d4-5043-48d7-91d1-68f2af96109a\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.054219 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ee25a7d4-5043-48d7-91d1-68f2af96109a-ca-trust-extracted\") pod \"ee25a7d4-5043-48d7-91d1-68f2af96109a\" (UID: \"ee25a7d4-5043-48d7-91d1-68f2af96109a\") " Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.056047 4871 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ee25a7d4-5043-48d7-91d1-68f2af96109a-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.056080 4871 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee25a7d4-5043-48d7-91d1-68f2af96109a-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.058696 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee25a7d4-5043-48d7-91d1-68f2af96109a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ee25a7d4-5043-48d7-91d1-68f2af96109a" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.059299 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee25a7d4-5043-48d7-91d1-68f2af96109a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ee25a7d4-5043-48d7-91d1-68f2af96109a" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.065913 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee25a7d4-5043-48d7-91d1-68f2af96109a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ee25a7d4-5043-48d7-91d1-68f2af96109a" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.066061 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee25a7d4-5043-48d7-91d1-68f2af96109a-kube-api-access-w44gq" (OuterVolumeSpecName: "kube-api-access-w44gq") pod "ee25a7d4-5043-48d7-91d1-68f2af96109a" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a"). InnerVolumeSpecName "kube-api-access-w44gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.068198 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ee25a7d4-5043-48d7-91d1-68f2af96109a" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.072339 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee25a7d4-5043-48d7-91d1-68f2af96109a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ee25a7d4-5043-48d7-91d1-68f2af96109a" (UID: "ee25a7d4-5043-48d7-91d1-68f2af96109a"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.157157 4871 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ee25a7d4-5043-48d7-91d1-68f2af96109a-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.157198 4871 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee25a7d4-5043-48d7-91d1-68f2af96109a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.157211 4871 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ee25a7d4-5043-48d7-91d1-68f2af96109a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.157222 4871 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ee25a7d4-5043-48d7-91d1-68f2af96109a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.157231 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w44gq\" (UniqueName: \"kubernetes.io/projected/ee25a7d4-5043-48d7-91d1-68f2af96109a-kube-api-access-w44gq\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.348418 4871 generic.go:334] "Generic (PLEG): container 
finished" podID="ee25a7d4-5043-48d7-91d1-68f2af96109a" containerID="302a76e553c56910bd5e7c282d2a3bb25956e6b787536255e09cdd28263a5967" exitCode=0 Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.348474 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" event={"ID":"ee25a7d4-5043-48d7-91d1-68f2af96109a","Type":"ContainerDied","Data":"302a76e553c56910bd5e7c282d2a3bb25956e6b787536255e09cdd28263a5967"} Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.348480 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.348511 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vprhz" event={"ID":"ee25a7d4-5043-48d7-91d1-68f2af96109a","Type":"ContainerDied","Data":"17ddab776518c0d7f89a99f535010a783d9f356ea56b08117c1668e7ea6b37e9"} Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.348533 4871 scope.go:117] "RemoveContainer" containerID="302a76e553c56910bd5e7c282d2a3bb25956e6b787536255e09cdd28263a5967" Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.364046 4871 scope.go:117] "RemoveContainer" containerID="302a76e553c56910bd5e7c282d2a3bb25956e6b787536255e09cdd28263a5967" Jan 28 15:24:15 crc kubenswrapper[4871]: E0128 15:24:15.364399 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"302a76e553c56910bd5e7c282d2a3bb25956e6b787536255e09cdd28263a5967\": container with ID starting with 302a76e553c56910bd5e7c282d2a3bb25956e6b787536255e09cdd28263a5967 not found: ID does not exist" containerID="302a76e553c56910bd5e7c282d2a3bb25956e6b787536255e09cdd28263a5967" Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.364434 4871 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"302a76e553c56910bd5e7c282d2a3bb25956e6b787536255e09cdd28263a5967"} err="failed to get container status \"302a76e553c56910bd5e7c282d2a3bb25956e6b787536255e09cdd28263a5967\": rpc error: code = NotFound desc = could not find container \"302a76e553c56910bd5e7c282d2a3bb25956e6b787536255e09cdd28263a5967\": container with ID starting with 302a76e553c56910bd5e7c282d2a3bb25956e6b787536255e09cdd28263a5967 not found: ID does not exist" Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.379055 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vprhz"] Jan 28 15:24:15 crc kubenswrapper[4871]: I0128 15:24:15.384914 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vprhz"] Jan 28 15:24:16 crc kubenswrapper[4871]: I0128 15:24:16.912034 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee25a7d4-5043-48d7-91d1-68f2af96109a" path="/var/lib/kubelet/pods/ee25a7d4-5043-48d7-91d1-68f2af96109a/volumes" Jan 28 15:24:43 crc kubenswrapper[4871]: I0128 15:24:43.813500 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:24:43 crc kubenswrapper[4871]: I0128 15:24:43.814253 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:24:43 crc kubenswrapper[4871]: I0128 15:24:43.814324 4871 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:24:43 crc kubenswrapper[4871]: I0128 15:24:43.814965 4871 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6d2a91b27216ac0ed31be3b15ce348b7ccc9fb4adf015bf16e3f9e5058afd244"} pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:24:43 crc kubenswrapper[4871]: I0128 15:24:43.815037 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" containerID="cri-o://6d2a91b27216ac0ed31be3b15ce348b7ccc9fb4adf015bf16e3f9e5058afd244" gracePeriod=600 Jan 28 15:24:44 crc kubenswrapper[4871]: I0128 15:24:44.521860 4871 generic.go:334] "Generic (PLEG): container finished" podID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerID="6d2a91b27216ac0ed31be3b15ce348b7ccc9fb4adf015bf16e3f9e5058afd244" exitCode=0 Jan 28 15:24:44 crc kubenswrapper[4871]: I0128 15:24:44.521942 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerDied","Data":"6d2a91b27216ac0ed31be3b15ce348b7ccc9fb4adf015bf16e3f9e5058afd244"} Jan 28 15:24:44 crc kubenswrapper[4871]: I0128 15:24:44.522488 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerStarted","Data":"90d6822a584774ad2a67fa1ad8223c539293bc3a8bfaabea013dfe8e391f8ad2"} Jan 28 15:24:44 crc kubenswrapper[4871]: I0128 15:24:44.522530 4871 scope.go:117] "RemoveContainer" 
containerID="bc39c6865d02dd3f85f379df66133847e143e74d6b083015d0e30402088a16ea" Jan 28 15:27:13 crc kubenswrapper[4871]: I0128 15:27:13.813862 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:27:13 crc kubenswrapper[4871]: I0128 15:27:13.814478 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:27:43 crc kubenswrapper[4871]: I0128 15:27:43.813087 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:27:43 crc kubenswrapper[4871]: I0128 15:27:43.814494 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:28:13 crc kubenswrapper[4871]: I0128 15:28:13.813803 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:28:13 crc kubenswrapper[4871]: I0128 15:28:13.814383 4871 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:28:13 crc kubenswrapper[4871]: I0128 15:28:13.814440 4871 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:28:13 crc kubenswrapper[4871]: I0128 15:28:13.815196 4871 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"90d6822a584774ad2a67fa1ad8223c539293bc3a8bfaabea013dfe8e391f8ad2"} pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:28:13 crc kubenswrapper[4871]: I0128 15:28:13.815297 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" containerID="cri-o://90d6822a584774ad2a67fa1ad8223c539293bc3a8bfaabea013dfe8e391f8ad2" gracePeriod=600 Jan 28 15:28:14 crc kubenswrapper[4871]: I0128 15:28:14.786246 4871 generic.go:334] "Generic (PLEG): container finished" podID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerID="90d6822a584774ad2a67fa1ad8223c539293bc3a8bfaabea013dfe8e391f8ad2" exitCode=0 Jan 28 15:28:14 crc kubenswrapper[4871]: I0128 15:28:14.786291 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerDied","Data":"90d6822a584774ad2a67fa1ad8223c539293bc3a8bfaabea013dfe8e391f8ad2"} Jan 28 15:28:14 crc kubenswrapper[4871]: I0128 
15:28:14.786322 4871 scope.go:117] "RemoveContainer" containerID="6d2a91b27216ac0ed31be3b15ce348b7ccc9fb4adf015bf16e3f9e5058afd244" Jan 28 15:28:15 crc kubenswrapper[4871]: I0128 15:28:15.791534 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerStarted","Data":"99a5b6a3a56a0129d3e0910f8bee719a8f441dd30871a64b674173f9c9123f18"} Jan 28 15:30:00 crc kubenswrapper[4871]: I0128 15:30:00.165826 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz"] Jan 28 15:30:00 crc kubenswrapper[4871]: E0128 15:30:00.166838 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee25a7d4-5043-48d7-91d1-68f2af96109a" containerName="registry" Jan 28 15:30:00 crc kubenswrapper[4871]: I0128 15:30:00.166854 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee25a7d4-5043-48d7-91d1-68f2af96109a" containerName="registry" Jan 28 15:30:00 crc kubenswrapper[4871]: I0128 15:30:00.166969 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee25a7d4-5043-48d7-91d1-68f2af96109a" containerName="registry" Jan 28 15:30:00 crc kubenswrapper[4871]: I0128 15:30:00.167392 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz" Jan 28 15:30:00 crc kubenswrapper[4871]: I0128 15:30:00.169577 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 15:30:00 crc kubenswrapper[4871]: I0128 15:30:00.169770 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 15:30:00 crc kubenswrapper[4871]: I0128 15:30:00.174726 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz"] Jan 28 15:30:00 crc kubenswrapper[4871]: I0128 15:30:00.239770 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e4aacaf4-4e08-4206-9dd2-82ab80507450-secret-volume\") pod \"collect-profiles-29493570-rnmdz\" (UID: \"e4aacaf4-4e08-4206-9dd2-82ab80507450\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz" Jan 28 15:30:00 crc kubenswrapper[4871]: I0128 15:30:00.239825 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4aacaf4-4e08-4206-9dd2-82ab80507450-config-volume\") pod \"collect-profiles-29493570-rnmdz\" (UID: \"e4aacaf4-4e08-4206-9dd2-82ab80507450\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz" Jan 28 15:30:00 crc kubenswrapper[4871]: I0128 15:30:00.239858 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cm7h\" (UniqueName: \"kubernetes.io/projected/e4aacaf4-4e08-4206-9dd2-82ab80507450-kube-api-access-9cm7h\") pod \"collect-profiles-29493570-rnmdz\" (UID: \"e4aacaf4-4e08-4206-9dd2-82ab80507450\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz" Jan 28 15:30:00 crc kubenswrapper[4871]: I0128 15:30:00.341753 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e4aacaf4-4e08-4206-9dd2-82ab80507450-secret-volume\") pod \"collect-profiles-29493570-rnmdz\" (UID: \"e4aacaf4-4e08-4206-9dd2-82ab80507450\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz" Jan 28 15:30:00 crc kubenswrapper[4871]: I0128 15:30:00.341853 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4aacaf4-4e08-4206-9dd2-82ab80507450-config-volume\") pod \"collect-profiles-29493570-rnmdz\" (UID: \"e4aacaf4-4e08-4206-9dd2-82ab80507450\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz" Jan 28 15:30:00 crc kubenswrapper[4871]: I0128 15:30:00.341908 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cm7h\" (UniqueName: \"kubernetes.io/projected/e4aacaf4-4e08-4206-9dd2-82ab80507450-kube-api-access-9cm7h\") pod \"collect-profiles-29493570-rnmdz\" (UID: \"e4aacaf4-4e08-4206-9dd2-82ab80507450\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz" Jan 28 15:30:00 crc kubenswrapper[4871]: I0128 15:30:00.343183 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4aacaf4-4e08-4206-9dd2-82ab80507450-config-volume\") pod \"collect-profiles-29493570-rnmdz\" (UID: \"e4aacaf4-4e08-4206-9dd2-82ab80507450\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz" Jan 28 15:30:00 crc kubenswrapper[4871]: I0128 15:30:00.348897 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e4aacaf4-4e08-4206-9dd2-82ab80507450-secret-volume\") pod \"collect-profiles-29493570-rnmdz\" (UID: \"e4aacaf4-4e08-4206-9dd2-82ab80507450\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz" Jan 28 15:30:00 crc kubenswrapper[4871]: I0128 15:30:00.361081 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cm7h\" (UniqueName: \"kubernetes.io/projected/e4aacaf4-4e08-4206-9dd2-82ab80507450-kube-api-access-9cm7h\") pod \"collect-profiles-29493570-rnmdz\" (UID: \"e4aacaf4-4e08-4206-9dd2-82ab80507450\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz" Jan 28 15:30:00 crc kubenswrapper[4871]: I0128 15:30:00.488260 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz" Jan 28 15:30:00 crc kubenswrapper[4871]: I0128 15:30:00.678710 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz"] Jan 28 15:30:01 crc kubenswrapper[4871]: I0128 15:30:01.409931 4871 generic.go:334] "Generic (PLEG): container finished" podID="e4aacaf4-4e08-4206-9dd2-82ab80507450" containerID="bd37983a281f7536f83d348d70f13ea6d182db398e5b3d9c63b173e0875ec3f9" exitCode=0 Jan 28 15:30:01 crc kubenswrapper[4871]: I0128 15:30:01.409988 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz" event={"ID":"e4aacaf4-4e08-4206-9dd2-82ab80507450","Type":"ContainerDied","Data":"bd37983a281f7536f83d348d70f13ea6d182db398e5b3d9c63b173e0875ec3f9"} Jan 28 15:30:01 crc kubenswrapper[4871]: I0128 15:30:01.410252 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz" 
event={"ID":"e4aacaf4-4e08-4206-9dd2-82ab80507450","Type":"ContainerStarted","Data":"8addb7620e4383e1980b4eb0cc83279641a276407b311c4b25740dcccbbf2697"} Jan 28 15:30:02 crc kubenswrapper[4871]: I0128 15:30:02.619569 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz" Jan 28 15:30:02 crc kubenswrapper[4871]: I0128 15:30:02.772287 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cm7h\" (UniqueName: \"kubernetes.io/projected/e4aacaf4-4e08-4206-9dd2-82ab80507450-kube-api-access-9cm7h\") pod \"e4aacaf4-4e08-4206-9dd2-82ab80507450\" (UID: \"e4aacaf4-4e08-4206-9dd2-82ab80507450\") " Jan 28 15:30:02 crc kubenswrapper[4871]: I0128 15:30:02.772354 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e4aacaf4-4e08-4206-9dd2-82ab80507450-secret-volume\") pod \"e4aacaf4-4e08-4206-9dd2-82ab80507450\" (UID: \"e4aacaf4-4e08-4206-9dd2-82ab80507450\") " Jan 28 15:30:02 crc kubenswrapper[4871]: I0128 15:30:02.772386 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4aacaf4-4e08-4206-9dd2-82ab80507450-config-volume\") pod \"e4aacaf4-4e08-4206-9dd2-82ab80507450\" (UID: \"e4aacaf4-4e08-4206-9dd2-82ab80507450\") " Jan 28 15:30:02 crc kubenswrapper[4871]: I0128 15:30:02.773425 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4aacaf4-4e08-4206-9dd2-82ab80507450-config-volume" (OuterVolumeSpecName: "config-volume") pod "e4aacaf4-4e08-4206-9dd2-82ab80507450" (UID: "e4aacaf4-4e08-4206-9dd2-82ab80507450"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:30:02 crc kubenswrapper[4871]: I0128 15:30:02.777627 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4aacaf4-4e08-4206-9dd2-82ab80507450-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e4aacaf4-4e08-4206-9dd2-82ab80507450" (UID: "e4aacaf4-4e08-4206-9dd2-82ab80507450"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:30:02 crc kubenswrapper[4871]: I0128 15:30:02.777682 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4aacaf4-4e08-4206-9dd2-82ab80507450-kube-api-access-9cm7h" (OuterVolumeSpecName: "kube-api-access-9cm7h") pod "e4aacaf4-4e08-4206-9dd2-82ab80507450" (UID: "e4aacaf4-4e08-4206-9dd2-82ab80507450"). InnerVolumeSpecName "kube-api-access-9cm7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:30:02 crc kubenswrapper[4871]: I0128 15:30:02.874562 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cm7h\" (UniqueName: \"kubernetes.io/projected/e4aacaf4-4e08-4206-9dd2-82ab80507450-kube-api-access-9cm7h\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:02 crc kubenswrapper[4871]: I0128 15:30:02.874639 4871 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e4aacaf4-4e08-4206-9dd2-82ab80507450-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:02 crc kubenswrapper[4871]: I0128 15:30:02.874654 4871 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4aacaf4-4e08-4206-9dd2-82ab80507450-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:03 crc kubenswrapper[4871]: I0128 15:30:03.423399 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz" 
event={"ID":"e4aacaf4-4e08-4206-9dd2-82ab80507450","Type":"ContainerDied","Data":"8addb7620e4383e1980b4eb0cc83279641a276407b311c4b25740dcccbbf2697"} Jan 28 15:30:03 crc kubenswrapper[4871]: I0128 15:30:03.423447 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8addb7620e4383e1980b4eb0cc83279641a276407b311c4b25740dcccbbf2697" Jan 28 15:30:03 crc kubenswrapper[4871]: I0128 15:30:03.423520 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz" Jan 28 15:30:03 crc kubenswrapper[4871]: I0128 15:30:03.479214 4871 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 15:30:06 crc kubenswrapper[4871]: I0128 15:30:06.922361 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m7j2k"] Jan 28 15:30:06 crc kubenswrapper[4871]: E0128 15:30:06.922871 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4aacaf4-4e08-4206-9dd2-82ab80507450" containerName="collect-profiles" Jan 28 15:30:06 crc kubenswrapper[4871]: I0128 15:30:06.922883 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4aacaf4-4e08-4206-9dd2-82ab80507450" containerName="collect-profiles" Jan 28 15:30:06 crc kubenswrapper[4871]: I0128 15:30:06.922976 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4aacaf4-4e08-4206-9dd2-82ab80507450" containerName="collect-profiles" Jan 28 15:30:06 crc kubenswrapper[4871]: I0128 15:30:06.923674 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m7j2k" Jan 28 15:30:06 crc kubenswrapper[4871]: I0128 15:30:06.939164 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m7j2k"] Jan 28 15:30:07 crc kubenswrapper[4871]: I0128 15:30:07.025461 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed52af4-6aa4-49d9-8027-92e410e6d536-utilities\") pod \"redhat-operators-m7j2k\" (UID: \"2ed52af4-6aa4-49d9-8027-92e410e6d536\") " pod="openshift-marketplace/redhat-operators-m7j2k" Jan 28 15:30:07 crc kubenswrapper[4871]: I0128 15:30:07.025541 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h44mm\" (UniqueName: \"kubernetes.io/projected/2ed52af4-6aa4-49d9-8027-92e410e6d536-kube-api-access-h44mm\") pod \"redhat-operators-m7j2k\" (UID: \"2ed52af4-6aa4-49d9-8027-92e410e6d536\") " pod="openshift-marketplace/redhat-operators-m7j2k" Jan 28 15:30:07 crc kubenswrapper[4871]: I0128 15:30:07.025569 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed52af4-6aa4-49d9-8027-92e410e6d536-catalog-content\") pod \"redhat-operators-m7j2k\" (UID: \"2ed52af4-6aa4-49d9-8027-92e410e6d536\") " pod="openshift-marketplace/redhat-operators-m7j2k" Jan 28 15:30:07 crc kubenswrapper[4871]: I0128 15:30:07.126374 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed52af4-6aa4-49d9-8027-92e410e6d536-utilities\") pod \"redhat-operators-m7j2k\" (UID: \"2ed52af4-6aa4-49d9-8027-92e410e6d536\") " pod="openshift-marketplace/redhat-operators-m7j2k" Jan 28 15:30:07 crc kubenswrapper[4871]: I0128 15:30:07.126433 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-h44mm\" (UniqueName: \"kubernetes.io/projected/2ed52af4-6aa4-49d9-8027-92e410e6d536-kube-api-access-h44mm\") pod \"redhat-operators-m7j2k\" (UID: \"2ed52af4-6aa4-49d9-8027-92e410e6d536\") " pod="openshift-marketplace/redhat-operators-m7j2k" Jan 28 15:30:07 crc kubenswrapper[4871]: I0128 15:30:07.126454 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed52af4-6aa4-49d9-8027-92e410e6d536-catalog-content\") pod \"redhat-operators-m7j2k\" (UID: \"2ed52af4-6aa4-49d9-8027-92e410e6d536\") " pod="openshift-marketplace/redhat-operators-m7j2k" Jan 28 15:30:07 crc kubenswrapper[4871]: I0128 15:30:07.126875 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed52af4-6aa4-49d9-8027-92e410e6d536-catalog-content\") pod \"redhat-operators-m7j2k\" (UID: \"2ed52af4-6aa4-49d9-8027-92e410e6d536\") " pod="openshift-marketplace/redhat-operators-m7j2k" Jan 28 15:30:07 crc kubenswrapper[4871]: I0128 15:30:07.126909 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed52af4-6aa4-49d9-8027-92e410e6d536-utilities\") pod \"redhat-operators-m7j2k\" (UID: \"2ed52af4-6aa4-49d9-8027-92e410e6d536\") " pod="openshift-marketplace/redhat-operators-m7j2k" Jan 28 15:30:07 crc kubenswrapper[4871]: I0128 15:30:07.147342 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h44mm\" (UniqueName: \"kubernetes.io/projected/2ed52af4-6aa4-49d9-8027-92e410e6d536-kube-api-access-h44mm\") pod \"redhat-operators-m7j2k\" (UID: \"2ed52af4-6aa4-49d9-8027-92e410e6d536\") " pod="openshift-marketplace/redhat-operators-m7j2k" Jan 28 15:30:07 crc kubenswrapper[4871]: I0128 15:30:07.246320 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m7j2k" Jan 28 15:30:07 crc kubenswrapper[4871]: I0128 15:30:07.459499 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m7j2k"] Jan 28 15:30:08 crc kubenswrapper[4871]: I0128 15:30:08.451209 4871 generic.go:334] "Generic (PLEG): container finished" podID="2ed52af4-6aa4-49d9-8027-92e410e6d536" containerID="b570c2d25f996236a40499888a14fc297c9bef74066fafd34fe8a360d866ae10" exitCode=0 Jan 28 15:30:08 crc kubenswrapper[4871]: I0128 15:30:08.452458 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7j2k" event={"ID":"2ed52af4-6aa4-49d9-8027-92e410e6d536","Type":"ContainerDied","Data":"b570c2d25f996236a40499888a14fc297c9bef74066fafd34fe8a360d866ae10"} Jan 28 15:30:08 crc kubenswrapper[4871]: I0128 15:30:08.452564 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7j2k" event={"ID":"2ed52af4-6aa4-49d9-8027-92e410e6d536","Type":"ContainerStarted","Data":"ac6bd2f3cb7d90652eb319fa45d94c2f0f0d3068cc075443ea740d2240b389ef"} Jan 28 15:30:08 crc kubenswrapper[4871]: I0128 15:30:08.454881 4871 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:30:09 crc kubenswrapper[4871]: I0128 15:30:09.459642 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7j2k" event={"ID":"2ed52af4-6aa4-49d9-8027-92e410e6d536","Type":"ContainerStarted","Data":"f030c32be1c4bd8c2b5743f1ca311763c477815a976015aa68a1a82496bd4738"} Jan 28 15:30:10 crc kubenswrapper[4871]: I0128 15:30:10.465954 4871 generic.go:334] "Generic (PLEG): container finished" podID="2ed52af4-6aa4-49d9-8027-92e410e6d536" containerID="f030c32be1c4bd8c2b5743f1ca311763c477815a976015aa68a1a82496bd4738" exitCode=0 Jan 28 15:30:10 crc kubenswrapper[4871]: I0128 15:30:10.465986 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-m7j2k" event={"ID":"2ed52af4-6aa4-49d9-8027-92e410e6d536","Type":"ContainerDied","Data":"f030c32be1c4bd8c2b5743f1ca311763c477815a976015aa68a1a82496bd4738"} Jan 28 15:30:11 crc kubenswrapper[4871]: I0128 15:30:11.475451 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7j2k" event={"ID":"2ed52af4-6aa4-49d9-8027-92e410e6d536","Type":"ContainerStarted","Data":"83da4f0683d6656c51f62fa5ecc7b7ca031ad549032ddac8fec5df692aaf281d"} Jan 28 15:30:11 crc kubenswrapper[4871]: I0128 15:30:11.506014 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m7j2k" podStartSLOduration=3.043229532 podStartE2EDuration="5.505990968s" podCreationTimestamp="2026-01-28 15:30:06 +0000 UTC" firstStartedPulling="2026-01-28 15:30:08.454707083 +0000 UTC m=+760.350545405" lastFinishedPulling="2026-01-28 15:30:10.917468519 +0000 UTC m=+762.813306841" observedRunningTime="2026-01-28 15:30:11.500468953 +0000 UTC m=+763.396307335" watchObservedRunningTime="2026-01-28 15:30:11.505990968 +0000 UTC m=+763.401829330" Jan 28 15:30:17 crc kubenswrapper[4871]: I0128 15:30:17.246840 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m7j2k" Jan 28 15:30:17 crc kubenswrapper[4871]: I0128 15:30:17.249145 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m7j2k" Jan 28 15:30:17 crc kubenswrapper[4871]: I0128 15:30:17.287443 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m7j2k" Jan 28 15:30:17 crc kubenswrapper[4871]: I0128 15:30:17.544458 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m7j2k" Jan 28 15:30:17 crc kubenswrapper[4871]: I0128 15:30:17.597433 4871 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m7j2k"] Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.293900 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fn5bb"] Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.294703 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovn-controller" containerID="cri-o://b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f" gracePeriod=30 Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.294972 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca" gracePeriod=30 Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.294997 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovn-acl-logging" containerID="cri-o://7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110" gracePeriod=30 Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.295001 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="kube-rbac-proxy-node" containerID="cri-o://aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655" gracePeriod=30 Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.295085 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="nbdb" 
containerID="cri-o://34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2" gracePeriod=30 Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.295015 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="sbdb" containerID="cri-o://4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039" gracePeriod=30 Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.294981 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="northd" containerID="cri-o://b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f" gracePeriod=30 Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.339476 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovnkube-controller" containerID="cri-o://7112857361fc94a800dbe93a58be9c315eea4e68600708fd11b6a78854fae1b7" gracePeriod=30 Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.520551 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovnkube-controller/3.log" Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.522522 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovn-acl-logging/0.log" Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.523061 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovn-controller/0.log" Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.523403 4871 generic.go:334] "Generic (PLEG): container 
finished" podID="178343c8-b657-4440-953e-6daef3609145" containerID="7112857361fc94a800dbe93a58be9c315eea4e68600708fd11b6a78854fae1b7" exitCode=0 Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.523432 4871 generic.go:334] "Generic (PLEG): container finished" podID="178343c8-b657-4440-953e-6daef3609145" containerID="9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca" exitCode=0 Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.523442 4871 generic.go:334] "Generic (PLEG): container finished" podID="178343c8-b657-4440-953e-6daef3609145" containerID="aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655" exitCode=0 Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.523450 4871 generic.go:334] "Generic (PLEG): container finished" podID="178343c8-b657-4440-953e-6daef3609145" containerID="7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110" exitCode=143 Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.523456 4871 generic.go:334] "Generic (PLEG): container finished" podID="178343c8-b657-4440-953e-6daef3609145" containerID="b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f" exitCode=143 Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.523520 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerDied","Data":"7112857361fc94a800dbe93a58be9c315eea4e68600708fd11b6a78854fae1b7"} Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.523629 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerDied","Data":"9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca"} Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.523647 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" 
event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerDied","Data":"aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655"} Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.523660 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerDied","Data":"7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110"} Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.523676 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerDied","Data":"b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f"} Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.523686 4871 scope.go:117] "RemoveContainer" containerID="7d0c2a2b38d9ad0ac6e4219b2ad68723992f3980697ca73508a0956647e966da" Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.524939 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-45mlg_d1955ba7-b91c-41de-97b7-188922cc0907/kube-multus/2.log" Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.525353 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-45mlg_d1955ba7-b91c-41de-97b7-188922cc0907/kube-multus/1.log" Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.525390 4871 generic.go:334] "Generic (PLEG): container finished" podID="d1955ba7-b91c-41de-97b7-188922cc0907" containerID="fece7b3fc90f5bd50df75ec40f6120278e747875e17b225537e037d57b4eed3f" exitCode=2 Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.525464 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-45mlg" event={"ID":"d1955ba7-b91c-41de-97b7-188922cc0907","Type":"ContainerDied","Data":"fece7b3fc90f5bd50df75ec40f6120278e747875e17b225537e037d57b4eed3f"} Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 
15:30:19.525550 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m7j2k" podUID="2ed52af4-6aa4-49d9-8027-92e410e6d536" containerName="registry-server" containerID="cri-o://83da4f0683d6656c51f62fa5ecc7b7ca031ad549032ddac8fec5df692aaf281d" gracePeriod=2 Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.525952 4871 scope.go:117] "RemoveContainer" containerID="fece7b3fc90f5bd50df75ec40f6120278e747875e17b225537e037d57b4eed3f" Jan 28 15:30:19 crc kubenswrapper[4871]: I0128 15:30:19.630014 4871 scope.go:117] "RemoveContainer" containerID="27c2298a7ba740a339e0cf8710c12bd89e613c3450bf9bbc1fdbf21d93e3da41" Jan 28 15:30:20 crc kubenswrapper[4871]: I0128 15:30:20.543516 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovn-acl-logging/0.log" Jan 28 15:30:20 crc kubenswrapper[4871]: I0128 15:30:20.544918 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovn-controller/0.log" Jan 28 15:30:20 crc kubenswrapper[4871]: I0128 15:30:20.548553 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-45mlg_d1955ba7-b91c-41de-97b7-188922cc0907/kube-multus/2.log" Jan 28 15:30:20 crc kubenswrapper[4871]: I0128 15:30:20.548702 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-45mlg" event={"ID":"d1955ba7-b91c-41de-97b7-188922cc0907","Type":"ContainerStarted","Data":"9ebdaa6fd3f83af12677c9edb9889f33f2019e95cf4873e4c6b6380ed2f950c4"} Jan 28 15:30:20 crc kubenswrapper[4871]: E0128 15:30:20.644076 4871 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039 is running failed: container process not found" 
containerID="4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 28 15:30:20 crc kubenswrapper[4871]: E0128 15:30:20.644172 4871 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2 is running failed: container process not found" containerID="34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 28 15:30:20 crc kubenswrapper[4871]: E0128 15:30:20.644287 4871 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039 is running failed: container process not found" containerID="4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 28 15:30:20 crc kubenswrapper[4871]: E0128 15:30:20.644525 4871 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2 is running failed: container process not found" containerID="34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 28 15:30:20 crc kubenswrapper[4871]: E0128 15:30:20.644792 4871 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039 is running failed: container process not found" containerID="4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 28 15:30:20 crc kubenswrapper[4871]: E0128 15:30:20.644795 4871 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2 is running failed: container process not found" containerID="34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 28 15:30:20 crc kubenswrapper[4871]: E0128 15:30:20.644814 4871 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="sbdb" Jan 28 15:30:20 crc kubenswrapper[4871]: E0128 15:30:20.644829 4871 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="nbdb" Jan 28 15:30:21 crc kubenswrapper[4871]: I0128 15:30:21.559422 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovn-acl-logging/0.log" Jan 28 15:30:21 crc kubenswrapper[4871]: I0128 15:30:21.560213 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovn-controller/0.log" Jan 28 15:30:21 crc kubenswrapper[4871]: I0128 15:30:21.560744 4871 generic.go:334] "Generic (PLEG): container finished" podID="178343c8-b657-4440-953e-6daef3609145" containerID="4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039" exitCode=0 Jan 28 15:30:21 crc kubenswrapper[4871]: I0128 15:30:21.560804 4871 generic.go:334] "Generic (PLEG): container finished" podID="178343c8-b657-4440-953e-6daef3609145" containerID="34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2" exitCode=0 Jan 28 15:30:21 crc kubenswrapper[4871]: I0128 15:30:21.560819 
4871 generic.go:334] "Generic (PLEG): container finished" podID="178343c8-b657-4440-953e-6daef3609145" containerID="b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f" exitCode=0 Jan 28 15:30:21 crc kubenswrapper[4871]: I0128 15:30:21.560809 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerDied","Data":"4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039"} Jan 28 15:30:21 crc kubenswrapper[4871]: I0128 15:30:21.560859 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerDied","Data":"34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2"} Jan 28 15:30:21 crc kubenswrapper[4871]: I0128 15:30:21.560872 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerDied","Data":"b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f"} Jan 28 15:30:21 crc kubenswrapper[4871]: I0128 15:30:21.563538 4871 generic.go:334] "Generic (PLEG): container finished" podID="2ed52af4-6aa4-49d9-8027-92e410e6d536" containerID="83da4f0683d6656c51f62fa5ecc7b7ca031ad549032ddac8fec5df692aaf281d" exitCode=0 Jan 28 15:30:21 crc kubenswrapper[4871]: I0128 15:30:21.563600 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7j2k" event={"ID":"2ed52af4-6aa4-49d9-8027-92e410e6d536","Type":"ContainerDied","Data":"83da4f0683d6656c51f62fa5ecc7b7ca031ad549032ddac8fec5df692aaf281d"} Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.434985 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovn-acl-logging/0.log" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 
15:30:22.435351 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m7j2k" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.435812 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovn-controller/0.log" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.436346 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534119 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed52af4-6aa4-49d9-8027-92e410e6d536-utilities\") pod \"2ed52af4-6aa4-49d9-8027-92e410e6d536\" (UID: \"2ed52af4-6aa4-49d9-8027-92e410e6d536\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534174 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-node-log\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534214 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/178343c8-b657-4440-953e-6daef3609145-env-overrides\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534237 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-kubelet\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc 
kubenswrapper[4871]: I0128 15:30:22.534252 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-run-netns\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534274 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-slash\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534303 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-run-systemd\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534318 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-run-openvswitch\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534333 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-cni-bin\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534386 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod 
"178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534398 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-node-log" (OuterVolumeSpecName: "node-log") pod "178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534430 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534416 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534448 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-slash" (OuterVolumeSpecName: "host-slash") pod "178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534467 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534566 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-systemd-units\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534629 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-run-ovn\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534650 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-etc-openvswitch\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534666 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-cni-netd\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534694 4871 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/178343c8-b657-4440-953e-6daef3609145-ovnkube-script-lib\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534714 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-var-lib-cni-networks-ovn-kubernetes\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534743 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed52af4-6aa4-49d9-8027-92e410e6d536-catalog-content\") pod \"2ed52af4-6aa4-49d9-8027-92e410e6d536\" (UID: \"2ed52af4-6aa4-49d9-8027-92e410e6d536\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534751 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/178343c8-b657-4440-953e-6daef3609145-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534772 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-run-ovn-kubernetes\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534845 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-log-socket\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534891 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtss7\" (UniqueName: \"kubernetes.io/projected/178343c8-b657-4440-953e-6daef3609145-kube-api-access-rtss7\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534916 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-var-lib-openvswitch\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534949 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/178343c8-b657-4440-953e-6daef3609145-ovnkube-config\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534971 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/178343c8-b657-4440-953e-6daef3609145-ovn-node-metrics-cert\") pod \"178343c8-b657-4440-953e-6daef3609145\" (UID: \"178343c8-b657-4440-953e-6daef3609145\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534994 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h44mm\" (UniqueName: \"kubernetes.io/projected/2ed52af4-6aa4-49d9-8027-92e410e6d536-kube-api-access-h44mm\") pod \"2ed52af4-6aa4-49d9-8027-92e410e6d536\" (UID: \"2ed52af4-6aa4-49d9-8027-92e410e6d536\") " Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534792 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535193 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/178343c8-b657-4440-953e-6daef3609145-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534806 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534818 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534831 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.534843 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535089 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ed52af4-6aa4-49d9-8027-92e410e6d536-utilities" (OuterVolumeSpecName: "utilities") pod "2ed52af4-6aa4-49d9-8027-92e410e6d536" (UID: "2ed52af4-6aa4-49d9-8027-92e410e6d536"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535241 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535296 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535428 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-log-socket" (OuterVolumeSpecName: "log-socket") pod "178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535705 4871 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-slash\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535726 4871 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535741 4871 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535753 4871 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535765 4871 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535777 4871 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535788 4871 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535800 4871 reconciler_common.go:293] 
"Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/178343c8-b657-4440-953e-6daef3609145-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535813 4871 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535828 4871 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535841 4871 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-log-socket\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535853 4871 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535864 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed52af4-6aa4-49d9-8027-92e410e6d536-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535875 4871 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-node-log\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535886 4871 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/178343c8-b657-4440-953e-6daef3609145-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535898 4871 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.535909 4871 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.536442 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/178343c8-b657-4440-953e-6daef3609145-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.539689 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed52af4-6aa4-49d9-8027-92e410e6d536-kube-api-access-h44mm" (OuterVolumeSpecName: "kube-api-access-h44mm") pod "2ed52af4-6aa4-49d9-8027-92e410e6d536" (UID: "2ed52af4-6aa4-49d9-8027-92e410e6d536"). InnerVolumeSpecName "kube-api-access-h44mm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540279 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/178343c8-b657-4440-953e-6daef3609145-kube-api-access-rtss7" (OuterVolumeSpecName: "kube-api-access-rtss7") pod "178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "kube-api-access-rtss7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540355 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4rlrr"] Jan 28 15:30:22 crc kubenswrapper[4871]: E0128 15:30:22.540569 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="sbdb" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540581 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="sbdb" Jan 28 15:30:22 crc kubenswrapper[4871]: E0128 15:30:22.540609 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovn-acl-logging" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540617 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovn-acl-logging" Jan 28 15:30:22 crc kubenswrapper[4871]: E0128 15:30:22.540630 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed52af4-6aa4-49d9-8027-92e410e6d536" containerName="registry-server" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540637 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed52af4-6aa4-49d9-8027-92e410e6d536" containerName="registry-server" Jan 28 15:30:22 crc kubenswrapper[4871]: E0128 15:30:22.540646 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovnkube-controller" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540653 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovnkube-controller" Jan 28 15:30:22 crc kubenswrapper[4871]: E0128 15:30:22.540663 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="kubecfg-setup" 
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540669 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="kubecfg-setup" Jan 28 15:30:22 crc kubenswrapper[4871]: E0128 15:30:22.540676 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed52af4-6aa4-49d9-8027-92e410e6d536" containerName="extract-content" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540683 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed52af4-6aa4-49d9-8027-92e410e6d536" containerName="extract-content" Jan 28 15:30:22 crc kubenswrapper[4871]: E0128 15:30:22.540692 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="northd" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540699 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="northd" Jan 28 15:30:22 crc kubenswrapper[4871]: E0128 15:30:22.540710 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovnkube-controller" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540718 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovnkube-controller" Jan 28 15:30:22 crc kubenswrapper[4871]: E0128 15:30:22.540729 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="kube-rbac-proxy-node" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540736 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="kube-rbac-proxy-node" Jan 28 15:30:22 crc kubenswrapper[4871]: E0128 15:30:22.540747 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovn-controller" Jan 28 15:30:22 crc 
kubenswrapper[4871]: I0128 15:30:22.540754 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovn-controller" Jan 28 15:30:22 crc kubenswrapper[4871]: E0128 15:30:22.540765 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed52af4-6aa4-49d9-8027-92e410e6d536" containerName="extract-utilities" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540772 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed52af4-6aa4-49d9-8027-92e410e6d536" containerName="extract-utilities" Jan 28 15:30:22 crc kubenswrapper[4871]: E0128 15:30:22.540781 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovnkube-controller" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540788 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovnkube-controller" Jan 28 15:30:22 crc kubenswrapper[4871]: E0128 15:30:22.540795 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540802 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 15:30:22 crc kubenswrapper[4871]: E0128 15:30:22.540829 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="nbdb" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540835 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="nbdb" Jan 28 15:30:22 crc kubenswrapper[4871]: E0128 15:30:22.540842 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovnkube-controller" Jan 28 15:30:22 crc 
kubenswrapper[4871]: I0128 15:30:22.540849 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovnkube-controller" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540942 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovnkube-controller" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540957 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovnkube-controller" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540966 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovn-controller" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540978 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ed52af4-6aa4-49d9-8027-92e410e6d536" containerName="registry-server" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540985 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="northd" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.540994 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovnkube-controller" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.541002 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.541011 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="kube-rbac-proxy-node" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.541019 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="178343c8-b657-4440-953e-6daef3609145" 
containerName="sbdb" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.541029 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovn-acl-logging" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.541041 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="nbdb" Jan 28 15:30:22 crc kubenswrapper[4871]: E0128 15:30:22.541143 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovnkube-controller" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.541151 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovnkube-controller" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.541274 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovnkube-controller" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.541287 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="178343c8-b657-4440-953e-6daef3609145" containerName="ovnkube-controller" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.543542 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.546620 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/178343c8-b657-4440-953e-6daef3609145-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.556460 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "178343c8-b657-4440-953e-6daef3609145" (UID: "178343c8-b657-4440-953e-6daef3609145"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.576486 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovn-acl-logging/0.log" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.576923 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn5bb_178343c8-b657-4440-953e-6daef3609145/ovn-controller/0.log" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.577220 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" event={"ID":"178343c8-b657-4440-953e-6daef3609145","Type":"ContainerDied","Data":"39fb03a8f95777e5196c225b31c659159448fed380393e101192906a41713bd5"} Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.577252 4871 scope.go:117] "RemoveContainer" containerID="7112857361fc94a800dbe93a58be9c315eea4e68600708fd11b6a78854fae1b7" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.577365 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fn5bb" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.582301 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7j2k" event={"ID":"2ed52af4-6aa4-49d9-8027-92e410e6d536","Type":"ContainerDied","Data":"ac6bd2f3cb7d90652eb319fa45d94c2f0f0d3068cc075443ea740d2240b389ef"} Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.582366 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m7j2k" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.597519 4871 scope.go:117] "RemoveContainer" containerID="4db5a49d0408a69a69e61765855019d3f7d101037871b1447b0619e0b9ea0039" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.613255 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fn5bb"] Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.617685 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fn5bb"] Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.619129 4871 scope.go:117] "RemoveContainer" containerID="34a4a355c825cea088d1f0d914a16561372d1a538db3410362be8307c8ea41b2" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.631037 4871 scope.go:117] "RemoveContainer" containerID="b78fa02a8f89876d2004d235492f6b8ea5def73dafdaa88d96e31ac7d0ceec1f" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.636771 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/465ae7e4-effc-4782-a230-fc76e3881d0e-ovn-node-metrics-cert\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.636913 4871 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/465ae7e4-effc-4782-a230-fc76e3881d0e-env-overrides\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.637001 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-cni-netd\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.637088 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-var-lib-openvswitch\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.637172 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-slash\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.637256 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-run-ovn\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr" Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.637338 4871 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/465ae7e4-effc-4782-a230-fc76e3881d0e-ovnkube-config\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.637417 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twwkl\" (UniqueName: \"kubernetes.io/projected/465ae7e4-effc-4782-a230-fc76e3881d0e-kube-api-access-twwkl\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.637497 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-systemd-units\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.637572 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-run-openvswitch\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.637682 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-kubelet\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.637761 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-node-log\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.637886 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.637980 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/465ae7e4-effc-4782-a230-fc76e3881d0e-ovnkube-script-lib\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.638060 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-run-systemd\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.638148 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-etc-openvswitch\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.638229 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-run-netns\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.638311 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-log-socket\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.638395 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-run-ovn-kubernetes\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.638476 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-cni-bin\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.638605 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtss7\" (UniqueName: \"kubernetes.io/projected/178343c8-b657-4440-953e-6daef3609145-kube-api-access-rtss7\") on node \"crc\" DevicePath \"\""
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.638679 4871 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/178343c8-b657-4440-953e-6daef3609145-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.638745 4871 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/178343c8-b657-4440-953e-6daef3609145-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.638812 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h44mm\" (UniqueName: \"kubernetes.io/projected/2ed52af4-6aa4-49d9-8027-92e410e6d536-kube-api-access-h44mm\") on node \"crc\" DevicePath \"\""
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.638942 4871 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/178343c8-b657-4440-953e-6daef3609145-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.641763 4871 scope.go:117] "RemoveContainer" containerID="9261491409a0279d8a68c392224aef839f521d19d042bcd9acdb7c6a15ded9ca"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.643717 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ed52af4-6aa4-49d9-8027-92e410e6d536-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ed52af4-6aa4-49d9-8027-92e410e6d536" (UID: "2ed52af4-6aa4-49d9-8027-92e410e6d536"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.654459 4871 scope.go:117] "RemoveContainer" containerID="aa8fe6151baa4ca89312b90512d2b0f9267995808b49a3c4c0fe41d5aa496655"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.672676 4871 scope.go:117] "RemoveContainer" containerID="7d01e0057a29da4b02779e71fa890cc415585e3791aeff801a331fbc6076d110"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.688040 4871 scope.go:117] "RemoveContainer" containerID="b9a3cb6511e92befaeca0a6b5842af535bf7d100db1a0d8d181ac29838f7108f"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.699316 4871 scope.go:117] "RemoveContainer" containerID="411da6c079c11b5c77da0d765661662246fbe5b164a398c5ec8eca30c1241c8a"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.712851 4871 scope.go:117] "RemoveContainer" containerID="83da4f0683d6656c51f62fa5ecc7b7ca031ad549032ddac8fec5df692aaf281d"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.732347 4871 scope.go:117] "RemoveContainer" containerID="f030c32be1c4bd8c2b5743f1ca311763c477815a976015aa68a1a82496bd4738"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.740413 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/465ae7e4-effc-4782-a230-fc76e3881d0e-env-overrides\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.740447 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/465ae7e4-effc-4782-a230-fc76e3881d0e-ovn-node-metrics-cert\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.740471 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-cni-netd\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.740641 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-var-lib-openvswitch\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.740662 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-slash\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.740841 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-run-ovn\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.740691 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-var-lib-openvswitch\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.740782 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-slash\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.740920 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-run-ovn\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.740657 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-cni-netd\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741109 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/465ae7e4-effc-4782-a230-fc76e3881d0e-ovnkube-config\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741146 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twwkl\" (UniqueName: \"kubernetes.io/projected/465ae7e4-effc-4782-a230-fc76e3881d0e-kube-api-access-twwkl\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741164 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-systemd-units\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741179 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-run-openvswitch\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741198 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-kubelet\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741213 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-node-log\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741235 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741260 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/465ae7e4-effc-4782-a230-fc76e3881d0e-ovnkube-script-lib\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741137 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/465ae7e4-effc-4782-a230-fc76e3881d0e-env-overrides\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741276 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-run-systemd\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741281 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-kubelet\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741320 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-run-systemd\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741329 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-etc-openvswitch\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741317 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-run-openvswitch\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741345 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741258 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-systemd-units\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741376 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-node-log\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741382 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-etc-openvswitch\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741401 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-run-netns\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741421 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-run-netns\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741464 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-log-socket\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741507 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-run-ovn-kubernetes\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741529 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-cni-bin\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741539 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/465ae7e4-effc-4782-a230-fc76e3881d0e-ovnkube-config\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741553 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-log-socket\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741712 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-run-ovn-kubernetes\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741718 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/465ae7e4-effc-4782-a230-fc76e3881d0e-host-cni-bin\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741746 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed52af4-6aa4-49d9-8027-92e410e6d536-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.741900 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/465ae7e4-effc-4782-a230-fc76e3881d0e-ovnkube-script-lib\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.743632 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/465ae7e4-effc-4782-a230-fc76e3881d0e-ovn-node-metrics-cert\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.749267 4871 scope.go:117] "RemoveContainer" containerID="b570c2d25f996236a40499888a14fc297c9bef74066fafd34fe8a360d866ae10"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.761003 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twwkl\" (UniqueName: \"kubernetes.io/projected/465ae7e4-effc-4782-a230-fc76e3881d0e-kube-api-access-twwkl\") pod \"ovnkube-node-4rlrr\" (UID: \"465ae7e4-effc-4782-a230-fc76e3881d0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.904957 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.917968 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="178343c8-b657-4440-953e-6daef3609145" path="/var/lib/kubelet/pods/178343c8-b657-4440-953e-6daef3609145/volumes"
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.930239 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m7j2k"]
Jan 28 15:30:22 crc kubenswrapper[4871]: I0128 15:30:22.938847 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m7j2k"]
Jan 28 15:30:23 crc kubenswrapper[4871]: I0128 15:30:23.592394 4871 generic.go:334] "Generic (PLEG): container finished" podID="465ae7e4-effc-4782-a230-fc76e3881d0e" containerID="6533a522a23593a4b2f544ad98d757f3e02bcc85ff9a8d7ffcddcd09a26a343f" exitCode=0
Jan 28 15:30:23 crc kubenswrapper[4871]: I0128 15:30:23.592480 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr" event={"ID":"465ae7e4-effc-4782-a230-fc76e3881d0e","Type":"ContainerDied","Data":"6533a522a23593a4b2f544ad98d757f3e02bcc85ff9a8d7ffcddcd09a26a343f"}
Jan 28 15:30:23 crc kubenswrapper[4871]: I0128 15:30:23.592748 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr" event={"ID":"465ae7e4-effc-4782-a230-fc76e3881d0e","Type":"ContainerStarted","Data":"f11d143aa57884201cb70725c99fae52675aa4b59b76b9cece64beff1e3e6d52"}
Jan 28 15:30:24 crc kubenswrapper[4871]: I0128 15:30:24.615436 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr" event={"ID":"465ae7e4-effc-4782-a230-fc76e3881d0e","Type":"ContainerStarted","Data":"54337c6047ead4ed83383712dcfa441b442b129a9d8e27a8331ad05e67f2aa90"}
Jan 28 15:30:24 crc kubenswrapper[4871]: I0128 15:30:24.617115 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr" event={"ID":"465ae7e4-effc-4782-a230-fc76e3881d0e","Type":"ContainerStarted","Data":"3643925b31e5a1ef1d2550a6ee7ee71731ab8fa1e548ee36657d0eba955c246d"}
Jan 28 15:30:24 crc kubenswrapper[4871]: I0128 15:30:24.617311 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr" event={"ID":"465ae7e4-effc-4782-a230-fc76e3881d0e","Type":"ContainerStarted","Data":"a2082abc85ed9545efaa4a5c9c5b92b30983b6a564c76f3bbaae7b89b4bad720"}
Jan 28 15:30:24 crc kubenswrapper[4871]: I0128 15:30:24.617402 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr" event={"ID":"465ae7e4-effc-4782-a230-fc76e3881d0e","Type":"ContainerStarted","Data":"c14d6d6f28318d1a2c1e73de2e872ac50576dcecb8ed619bac1f30fb8bccad04"}
Jan 28 15:30:24 crc kubenswrapper[4871]: I0128 15:30:24.617483 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr" event={"ID":"465ae7e4-effc-4782-a230-fc76e3881d0e","Type":"ContainerStarted","Data":"6385285c59151f70eaf15bc00e045f421fc5b5c6779bee77bf03c6c590ebee17"}
Jan 28 15:30:24 crc kubenswrapper[4871]: I0128 15:30:24.617567 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr" event={"ID":"465ae7e4-effc-4782-a230-fc76e3881d0e","Type":"ContainerStarted","Data":"ddb478b9393178403be9b2706da61d82c008da96af18bd16ef8c33040a866145"}
Jan 28 15:30:24 crc kubenswrapper[4871]: I0128 15:30:24.923016 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ed52af4-6aa4-49d9-8027-92e410e6d536" path="/var/lib/kubelet/pods/2ed52af4-6aa4-49d9-8027-92e410e6d536/volumes"
Jan 28 15:30:27 crc kubenswrapper[4871]: I0128 15:30:27.159254 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-fjhb6"]
Jan 28 15:30:27 crc kubenswrapper[4871]: I0128 15:30:27.161015 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-fjhb6"
Jan 28 15:30:27 crc kubenswrapper[4871]: I0128 15:30:27.163084 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Jan 28 15:30:27 crc kubenswrapper[4871]: I0128 15:30:27.163366 4871 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-r8j5v"
Jan 28 15:30:27 crc kubenswrapper[4871]: I0128 15:30:27.164163 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Jan 28 15:30:27 crc kubenswrapper[4871]: I0128 15:30:27.164709 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Jan 28 15:30:27 crc kubenswrapper[4871]: I0128 15:30:27.202513 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pngl\" (UniqueName: \"kubernetes.io/projected/392b6cab-a10c-4ee2-aae6-f67e83eb2657-kube-api-access-5pngl\") pod \"crc-storage-crc-fjhb6\" (UID: \"392b6cab-a10c-4ee2-aae6-f67e83eb2657\") " pod="crc-storage/crc-storage-crc-fjhb6"
Jan 28 15:30:27 crc kubenswrapper[4871]: I0128 15:30:27.202664 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/392b6cab-a10c-4ee2-aae6-f67e83eb2657-crc-storage\") pod \"crc-storage-crc-fjhb6\" (UID: \"392b6cab-a10c-4ee2-aae6-f67e83eb2657\") " pod="crc-storage/crc-storage-crc-fjhb6"
Jan 28 15:30:27 crc kubenswrapper[4871]: I0128 15:30:27.202759 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/392b6cab-a10c-4ee2-aae6-f67e83eb2657-node-mnt\") pod \"crc-storage-crc-fjhb6\" (UID: \"392b6cab-a10c-4ee2-aae6-f67e83eb2657\") " pod="crc-storage/crc-storage-crc-fjhb6"
Jan 28 15:30:27 crc kubenswrapper[4871]: I0128 15:30:27.304442 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/392b6cab-a10c-4ee2-aae6-f67e83eb2657-crc-storage\") pod \"crc-storage-crc-fjhb6\" (UID: \"392b6cab-a10c-4ee2-aae6-f67e83eb2657\") " pod="crc-storage/crc-storage-crc-fjhb6"
Jan 28 15:30:27 crc kubenswrapper[4871]: I0128 15:30:27.304543 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/392b6cab-a10c-4ee2-aae6-f67e83eb2657-node-mnt\") pod \"crc-storage-crc-fjhb6\" (UID: \"392b6cab-a10c-4ee2-aae6-f67e83eb2657\") " pod="crc-storage/crc-storage-crc-fjhb6"
Jan 28 15:30:27 crc kubenswrapper[4871]: I0128 15:30:27.304758 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pngl\" (UniqueName: \"kubernetes.io/projected/392b6cab-a10c-4ee2-aae6-f67e83eb2657-kube-api-access-5pngl\") pod \"crc-storage-crc-fjhb6\" (UID: \"392b6cab-a10c-4ee2-aae6-f67e83eb2657\") " pod="crc-storage/crc-storage-crc-fjhb6"
Jan 28 15:30:27 crc kubenswrapper[4871]: I0128 15:30:27.305167 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/392b6cab-a10c-4ee2-aae6-f67e83eb2657-node-mnt\") pod \"crc-storage-crc-fjhb6\" (UID: \"392b6cab-a10c-4ee2-aae6-f67e83eb2657\") " pod="crc-storage/crc-storage-crc-fjhb6"
Jan 28 15:30:27 crc kubenswrapper[4871]: I0128 15:30:27.306006 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/392b6cab-a10c-4ee2-aae6-f67e83eb2657-crc-storage\") pod \"crc-storage-crc-fjhb6\" (UID: \"392b6cab-a10c-4ee2-aae6-f67e83eb2657\") " pod="crc-storage/crc-storage-crc-fjhb6"
Jan 28 15:30:27 crc kubenswrapper[4871]: I0128 15:30:27.331062 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pngl\" (UniqueName: \"kubernetes.io/projected/392b6cab-a10c-4ee2-aae6-f67e83eb2657-kube-api-access-5pngl\") pod \"crc-storage-crc-fjhb6\" (UID: \"392b6cab-a10c-4ee2-aae6-f67e83eb2657\") " pod="crc-storage/crc-storage-crc-fjhb6"
Jan 28 15:30:27 crc kubenswrapper[4871]: I0128 15:30:27.482836 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-fjhb6"
Jan 28 15:30:27 crc kubenswrapper[4871]: E0128 15:30:27.504719 4871 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-fjhb6_crc-storage_392b6cab-a10c-4ee2-aae6-f67e83eb2657_0(a6357caaf3bfcd4ff3bb79c846e352de6f00300e8de331937ede38cc6041d4f7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 28 15:30:27 crc kubenswrapper[4871]: E0128 15:30:27.504813 4871 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-fjhb6_crc-storage_392b6cab-a10c-4ee2-aae6-f67e83eb2657_0(a6357caaf3bfcd4ff3bb79c846e352de6f00300e8de331937ede38cc6041d4f7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-fjhb6"
Jan 28 15:30:27 crc kubenswrapper[4871]: E0128 15:30:27.504839 4871 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-fjhb6_crc-storage_392b6cab-a10c-4ee2-aae6-f67e83eb2657_0(a6357caaf3bfcd4ff3bb79c846e352de6f00300e8de331937ede38cc6041d4f7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-fjhb6"
Jan 28 15:30:27 crc kubenswrapper[4871]: E0128 15:30:27.504906 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-fjhb6_crc-storage(392b6cab-a10c-4ee2-aae6-f67e83eb2657)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-fjhb6_crc-storage(392b6cab-a10c-4ee2-aae6-f67e83eb2657)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-fjhb6_crc-storage_392b6cab-a10c-4ee2-aae6-f67e83eb2657_0(a6357caaf3bfcd4ff3bb79c846e352de6f00300e8de331937ede38cc6041d4f7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-fjhb6" podUID="392b6cab-a10c-4ee2-aae6-f67e83eb2657"
Jan 28 15:30:27 crc kubenswrapper[4871]: I0128 15:30:27.639410 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr" event={"ID":"465ae7e4-effc-4782-a230-fc76e3881d0e","Type":"ContainerStarted","Data":"009754eab770b305fed361ee19b368fd430896855b991958c963b3fd23d15b4f"}
Jan 28 15:30:29 crc kubenswrapper[4871]: I0128 15:30:29.656067 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr" event={"ID":"465ae7e4-effc-4782-a230-fc76e3881d0e","Type":"ContainerStarted","Data":"0df9675fb0f7e727c797b3a9a6abf0b0bb9d3a5a3c947a50a1a5fc3722766d44"}
Jan 28 15:30:29 crc kubenswrapper[4871]: I0128 15:30:29.656658 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:29 crc kubenswrapper[4871]: I0128 15:30:29.656716 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:29 crc kubenswrapper[4871]: I0128 15:30:29.697168 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr" podStartSLOduration=7.697143846 podStartE2EDuration="7.697143846s" podCreationTimestamp="2026-01-28 15:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:30:29.695965459 +0000 UTC m=+781.591803811" watchObservedRunningTime="2026-01-28 15:30:29.697143846 +0000 UTC m=+781.592982178"
Jan 28 15:30:29 crc kubenswrapper[4871]: I0128 15:30:29.701052 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:30 crc kubenswrapper[4871]: I0128 15:30:30.248096 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-fjhb6"]
Jan 28 15:30:30 crc kubenswrapper[4871]: I0128 15:30:30.248237 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-fjhb6"
Jan 28 15:30:30 crc kubenswrapper[4871]: I0128 15:30:30.248704 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-fjhb6"
Jan 28 15:30:30 crc kubenswrapper[4871]: E0128 15:30:30.276187 4871 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-fjhb6_crc-storage_392b6cab-a10c-4ee2-aae6-f67e83eb2657_0(2d2a825f195802a28a21b2292285f9e987cb00f0abfdb5634ade661c4fb15727): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 28 15:30:30 crc kubenswrapper[4871]: E0128 15:30:30.276267 4871 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-fjhb6_crc-storage_392b6cab-a10c-4ee2-aae6-f67e83eb2657_0(2d2a825f195802a28a21b2292285f9e987cb00f0abfdb5634ade661c4fb15727): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-fjhb6"
Jan 28 15:30:30 crc kubenswrapper[4871]: E0128 15:30:30.276299 4871 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-fjhb6_crc-storage_392b6cab-a10c-4ee2-aae6-f67e83eb2657_0(2d2a825f195802a28a21b2292285f9e987cb00f0abfdb5634ade661c4fb15727): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-fjhb6"
Jan 28 15:30:30 crc kubenswrapper[4871]: E0128 15:30:30.276370 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-fjhb6_crc-storage(392b6cab-a10c-4ee2-aae6-f67e83eb2657)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-fjhb6_crc-storage(392b6cab-a10c-4ee2-aae6-f67e83eb2657)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-fjhb6_crc-storage_392b6cab-a10c-4ee2-aae6-f67e83eb2657_0(2d2a825f195802a28a21b2292285f9e987cb00f0abfdb5634ade661c4fb15727): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-fjhb6" podUID="392b6cab-a10c-4ee2-aae6-f67e83eb2657"
Jan 28 15:30:30 crc kubenswrapper[4871]: I0128 15:30:30.664103 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:30 crc kubenswrapper[4871]: I0128 15:30:30.701529 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr"
Jan 28 15:30:42 crc kubenswrapper[4871]: I0128 15:30:42.903297 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-fjhb6"
Jan 28 15:30:42 crc kubenswrapper[4871]: I0128 15:30:42.904646 4871 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="crc-storage/crc-storage-crc-fjhb6" Jan 28 15:30:43 crc kubenswrapper[4871]: I0128 15:30:43.338767 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-fjhb6"] Jan 28 15:30:43 crc kubenswrapper[4871]: W0128 15:30:43.349107 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod392b6cab_a10c_4ee2_aae6_f67e83eb2657.slice/crio-f100127d28f46cad7edc57ace072b5d50728224013015e07204b6d16994ef067 WatchSource:0}: Error finding container f100127d28f46cad7edc57ace072b5d50728224013015e07204b6d16994ef067: Status 404 returned error can't find the container with id f100127d28f46cad7edc57ace072b5d50728224013015e07204b6d16994ef067 Jan 28 15:30:43 crc kubenswrapper[4871]: I0128 15:30:43.741997 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-fjhb6" event={"ID":"392b6cab-a10c-4ee2-aae6-f67e83eb2657","Type":"ContainerStarted","Data":"f100127d28f46cad7edc57ace072b5d50728224013015e07204b6d16994ef067"} Jan 28 15:30:43 crc kubenswrapper[4871]: I0128 15:30:43.814110 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:30:43 crc kubenswrapper[4871]: I0128 15:30:43.814217 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:30:45 crc kubenswrapper[4871]: I0128 15:30:45.752443 4871 generic.go:334] "Generic (PLEG): container finished" podID="392b6cab-a10c-4ee2-aae6-f67e83eb2657" 
containerID="ed35ec26bde0be5b6d35592b3733a1cf546ba10f4f9349ed6cdf96d23d938d73" exitCode=0 Jan 28 15:30:45 crc kubenswrapper[4871]: I0128 15:30:45.752522 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-fjhb6" event={"ID":"392b6cab-a10c-4ee2-aae6-f67e83eb2657","Type":"ContainerDied","Data":"ed35ec26bde0be5b6d35592b3733a1cf546ba10f4f9349ed6cdf96d23d938d73"} Jan 28 15:30:47 crc kubenswrapper[4871]: I0128 15:30:47.008346 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-fjhb6" Jan 28 15:30:47 crc kubenswrapper[4871]: I0128 15:30:47.071210 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pngl\" (UniqueName: \"kubernetes.io/projected/392b6cab-a10c-4ee2-aae6-f67e83eb2657-kube-api-access-5pngl\") pod \"392b6cab-a10c-4ee2-aae6-f67e83eb2657\" (UID: \"392b6cab-a10c-4ee2-aae6-f67e83eb2657\") " Jan 28 15:30:47 crc kubenswrapper[4871]: I0128 15:30:47.071355 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/392b6cab-a10c-4ee2-aae6-f67e83eb2657-crc-storage\") pod \"392b6cab-a10c-4ee2-aae6-f67e83eb2657\" (UID: \"392b6cab-a10c-4ee2-aae6-f67e83eb2657\") " Jan 28 15:30:47 crc kubenswrapper[4871]: I0128 15:30:47.071388 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/392b6cab-a10c-4ee2-aae6-f67e83eb2657-node-mnt\") pod \"392b6cab-a10c-4ee2-aae6-f67e83eb2657\" (UID: \"392b6cab-a10c-4ee2-aae6-f67e83eb2657\") " Jan 28 15:30:47 crc kubenswrapper[4871]: I0128 15:30:47.071712 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/392b6cab-a10c-4ee2-aae6-f67e83eb2657-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "392b6cab-a10c-4ee2-aae6-f67e83eb2657" (UID: "392b6cab-a10c-4ee2-aae6-f67e83eb2657"). 
InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:47 crc kubenswrapper[4871]: I0128 15:30:47.078772 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/392b6cab-a10c-4ee2-aae6-f67e83eb2657-kube-api-access-5pngl" (OuterVolumeSpecName: "kube-api-access-5pngl") pod "392b6cab-a10c-4ee2-aae6-f67e83eb2657" (UID: "392b6cab-a10c-4ee2-aae6-f67e83eb2657"). InnerVolumeSpecName "kube-api-access-5pngl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:30:47 crc kubenswrapper[4871]: I0128 15:30:47.083989 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392b6cab-a10c-4ee2-aae6-f67e83eb2657-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "392b6cab-a10c-4ee2-aae6-f67e83eb2657" (UID: "392b6cab-a10c-4ee2-aae6-f67e83eb2657"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:30:47 crc kubenswrapper[4871]: I0128 15:30:47.173154 4871 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/392b6cab-a10c-4ee2-aae6-f67e83eb2657-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:47 crc kubenswrapper[4871]: I0128 15:30:47.173185 4871 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/392b6cab-a10c-4ee2-aae6-f67e83eb2657-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:47 crc kubenswrapper[4871]: I0128 15:30:47.173194 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pngl\" (UniqueName: \"kubernetes.io/projected/392b6cab-a10c-4ee2-aae6-f67e83eb2657-kube-api-access-5pngl\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:47 crc kubenswrapper[4871]: I0128 15:30:47.768718 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-fjhb6" 
event={"ID":"392b6cab-a10c-4ee2-aae6-f67e83eb2657","Type":"ContainerDied","Data":"f100127d28f46cad7edc57ace072b5d50728224013015e07204b6d16994ef067"} Jan 28 15:30:47 crc kubenswrapper[4871]: I0128 15:30:47.768766 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f100127d28f46cad7edc57ace072b5d50728224013015e07204b6d16994ef067" Jan 28 15:30:47 crc kubenswrapper[4871]: I0128 15:30:47.768787 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-fjhb6" Jan 28 15:30:52 crc kubenswrapper[4871]: I0128 15:30:52.932864 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4rlrr" Jan 28 15:30:54 crc kubenswrapper[4871]: I0128 15:30:54.714370 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b"] Jan 28 15:30:54 crc kubenswrapper[4871]: E0128 15:30:54.714603 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392b6cab-a10c-4ee2-aae6-f67e83eb2657" containerName="storage" Jan 28 15:30:54 crc kubenswrapper[4871]: I0128 15:30:54.714617 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="392b6cab-a10c-4ee2-aae6-f67e83eb2657" containerName="storage" Jan 28 15:30:54 crc kubenswrapper[4871]: I0128 15:30:54.714715 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="392b6cab-a10c-4ee2-aae6-f67e83eb2657" containerName="storage" Jan 28 15:30:54 crc kubenswrapper[4871]: I0128 15:30:54.715400 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b" Jan 28 15:30:54 crc kubenswrapper[4871]: I0128 15:30:54.716990 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 15:30:54 crc kubenswrapper[4871]: I0128 15:30:54.725512 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b"] Jan 28 15:30:54 crc kubenswrapper[4871]: I0128 15:30:54.767292 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9516ba5d-c370-480d-8ab7-5e90e188fc9b-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b\" (UID: \"9516ba5d-c370-480d-8ab7-5e90e188fc9b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b" Jan 28 15:30:54 crc kubenswrapper[4871]: I0128 15:30:54.767334 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9516ba5d-c370-480d-8ab7-5e90e188fc9b-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b\" (UID: \"9516ba5d-c370-480d-8ab7-5e90e188fc9b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b" Jan 28 15:30:54 crc kubenswrapper[4871]: I0128 15:30:54.767353 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7pk8\" (UniqueName: \"kubernetes.io/projected/9516ba5d-c370-480d-8ab7-5e90e188fc9b-kube-api-access-l7pk8\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b\" (UID: \"9516ba5d-c370-480d-8ab7-5e90e188fc9b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b" Jan 28 15:30:54 crc kubenswrapper[4871]: 
I0128 15:30:54.868750 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9516ba5d-c370-480d-8ab7-5e90e188fc9b-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b\" (UID: \"9516ba5d-c370-480d-8ab7-5e90e188fc9b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b" Jan 28 15:30:54 crc kubenswrapper[4871]: I0128 15:30:54.868832 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9516ba5d-c370-480d-8ab7-5e90e188fc9b-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b\" (UID: \"9516ba5d-c370-480d-8ab7-5e90e188fc9b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b" Jan 28 15:30:54 crc kubenswrapper[4871]: I0128 15:30:54.868864 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7pk8\" (UniqueName: \"kubernetes.io/projected/9516ba5d-c370-480d-8ab7-5e90e188fc9b-kube-api-access-l7pk8\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b\" (UID: \"9516ba5d-c370-480d-8ab7-5e90e188fc9b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b" Jan 28 15:30:54 crc kubenswrapper[4871]: I0128 15:30:54.869548 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9516ba5d-c370-480d-8ab7-5e90e188fc9b-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b\" (UID: \"9516ba5d-c370-480d-8ab7-5e90e188fc9b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b" Jan 28 15:30:54 crc kubenswrapper[4871]: I0128 15:30:54.869569 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/9516ba5d-c370-480d-8ab7-5e90e188fc9b-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b\" (UID: \"9516ba5d-c370-480d-8ab7-5e90e188fc9b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b" Jan 28 15:30:54 crc kubenswrapper[4871]: I0128 15:30:54.897625 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7pk8\" (UniqueName: \"kubernetes.io/projected/9516ba5d-c370-480d-8ab7-5e90e188fc9b-kube-api-access-l7pk8\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b\" (UID: \"9516ba5d-c370-480d-8ab7-5e90e188fc9b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b" Jan 28 15:30:55 crc kubenswrapper[4871]: I0128 15:30:55.033324 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b" Jan 28 15:30:55 crc kubenswrapper[4871]: I0128 15:30:55.249152 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b"] Jan 28 15:30:55 crc kubenswrapper[4871]: I0128 15:30:55.818991 4871 generic.go:334] "Generic (PLEG): container finished" podID="9516ba5d-c370-480d-8ab7-5e90e188fc9b" containerID="f471f358170357bc4b3af04e51eabfc65f48e8fbc2f1fba00ca57e652837cd8b" exitCode=0 Jan 28 15:30:55 crc kubenswrapper[4871]: I0128 15:30:55.819033 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b" event={"ID":"9516ba5d-c370-480d-8ab7-5e90e188fc9b","Type":"ContainerDied","Data":"f471f358170357bc4b3af04e51eabfc65f48e8fbc2f1fba00ca57e652837cd8b"} Jan 28 15:30:55 crc kubenswrapper[4871]: I0128 15:30:55.819062 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b" event={"ID":"9516ba5d-c370-480d-8ab7-5e90e188fc9b","Type":"ContainerStarted","Data":"cbd20c39cb36d800375c8cae562b4532619c556eab0662c0091f39443db33759"} Jan 28 15:31:01 crc kubenswrapper[4871]: I0128 15:31:01.859995 4871 generic.go:334] "Generic (PLEG): container finished" podID="9516ba5d-c370-480d-8ab7-5e90e188fc9b" containerID="7a63390a86c8ea689e7b53cb3dba7a58c31290b9ec18e43e2350b83c32a08f8b" exitCode=0 Jan 28 15:31:01 crc kubenswrapper[4871]: I0128 15:31:01.860068 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b" event={"ID":"9516ba5d-c370-480d-8ab7-5e90e188fc9b","Type":"ContainerDied","Data":"7a63390a86c8ea689e7b53cb3dba7a58c31290b9ec18e43e2350b83c32a08f8b"} Jan 28 15:31:02 crc kubenswrapper[4871]: I0128 15:31:02.866578 4871 generic.go:334] "Generic (PLEG): container finished" podID="9516ba5d-c370-480d-8ab7-5e90e188fc9b" containerID="14b24d86842076407a120b9d5616970e62e572d45bf39fa472e12ae09a7f9ecd" exitCode=0 Jan 28 15:31:02 crc kubenswrapper[4871]: I0128 15:31:02.866716 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b" event={"ID":"9516ba5d-c370-480d-8ab7-5e90e188fc9b","Type":"ContainerDied","Data":"14b24d86842076407a120b9d5616970e62e572d45bf39fa472e12ae09a7f9ecd"} Jan 28 15:31:04 crc kubenswrapper[4871]: I0128 15:31:04.198318 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b" Jan 28 15:31:04 crc kubenswrapper[4871]: I0128 15:31:04.297688 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9516ba5d-c370-480d-8ab7-5e90e188fc9b-bundle\") pod \"9516ba5d-c370-480d-8ab7-5e90e188fc9b\" (UID: \"9516ba5d-c370-480d-8ab7-5e90e188fc9b\") " Jan 28 15:31:04 crc kubenswrapper[4871]: I0128 15:31:04.297805 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7pk8\" (UniqueName: \"kubernetes.io/projected/9516ba5d-c370-480d-8ab7-5e90e188fc9b-kube-api-access-l7pk8\") pod \"9516ba5d-c370-480d-8ab7-5e90e188fc9b\" (UID: \"9516ba5d-c370-480d-8ab7-5e90e188fc9b\") " Jan 28 15:31:04 crc kubenswrapper[4871]: I0128 15:31:04.297834 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9516ba5d-c370-480d-8ab7-5e90e188fc9b-util\") pod \"9516ba5d-c370-480d-8ab7-5e90e188fc9b\" (UID: \"9516ba5d-c370-480d-8ab7-5e90e188fc9b\") " Jan 28 15:31:04 crc kubenswrapper[4871]: I0128 15:31:04.299176 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9516ba5d-c370-480d-8ab7-5e90e188fc9b-bundle" (OuterVolumeSpecName: "bundle") pod "9516ba5d-c370-480d-8ab7-5e90e188fc9b" (UID: "9516ba5d-c370-480d-8ab7-5e90e188fc9b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:31:04 crc kubenswrapper[4871]: I0128 15:31:04.306708 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9516ba5d-c370-480d-8ab7-5e90e188fc9b-kube-api-access-l7pk8" (OuterVolumeSpecName: "kube-api-access-l7pk8") pod "9516ba5d-c370-480d-8ab7-5e90e188fc9b" (UID: "9516ba5d-c370-480d-8ab7-5e90e188fc9b"). InnerVolumeSpecName "kube-api-access-l7pk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:04 crc kubenswrapper[4871]: I0128 15:31:04.321922 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9516ba5d-c370-480d-8ab7-5e90e188fc9b-util" (OuterVolumeSpecName: "util") pod "9516ba5d-c370-480d-8ab7-5e90e188fc9b" (UID: "9516ba5d-c370-480d-8ab7-5e90e188fc9b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:31:04 crc kubenswrapper[4871]: I0128 15:31:04.399397 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7pk8\" (UniqueName: \"kubernetes.io/projected/9516ba5d-c370-480d-8ab7-5e90e188fc9b-kube-api-access-l7pk8\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:04 crc kubenswrapper[4871]: I0128 15:31:04.399473 4871 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9516ba5d-c370-480d-8ab7-5e90e188fc9b-util\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:04 crc kubenswrapper[4871]: I0128 15:31:04.399503 4871 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9516ba5d-c370-480d-8ab7-5e90e188fc9b-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:04 crc kubenswrapper[4871]: I0128 15:31:04.883892 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b" event={"ID":"9516ba5d-c370-480d-8ab7-5e90e188fc9b","Type":"ContainerDied","Data":"cbd20c39cb36d800375c8cae562b4532619c556eab0662c0091f39443db33759"} Jan 28 15:31:04 crc kubenswrapper[4871]: I0128 15:31:04.883939 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbd20c39cb36d800375c8cae562b4532619c556eab0662c0091f39443db33759" Jan 28 15:31:04 crc kubenswrapper[4871]: I0128 15:31:04.884024 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b" Jan 28 15:31:11 crc kubenswrapper[4871]: I0128 15:31:11.384342 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-wp8qk"] Jan 28 15:31:11 crc kubenswrapper[4871]: E0128 15:31:11.385874 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9516ba5d-c370-480d-8ab7-5e90e188fc9b" containerName="pull" Jan 28 15:31:11 crc kubenswrapper[4871]: I0128 15:31:11.385964 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="9516ba5d-c370-480d-8ab7-5e90e188fc9b" containerName="pull" Jan 28 15:31:11 crc kubenswrapper[4871]: E0128 15:31:11.386018 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9516ba5d-c370-480d-8ab7-5e90e188fc9b" containerName="util" Jan 28 15:31:11 crc kubenswrapper[4871]: I0128 15:31:11.386072 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="9516ba5d-c370-480d-8ab7-5e90e188fc9b" containerName="util" Jan 28 15:31:11 crc kubenswrapper[4871]: E0128 15:31:11.386142 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9516ba5d-c370-480d-8ab7-5e90e188fc9b" containerName="extract" Jan 28 15:31:11 crc kubenswrapper[4871]: I0128 15:31:11.386192 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="9516ba5d-c370-480d-8ab7-5e90e188fc9b" containerName="extract" Jan 28 15:31:11 crc kubenswrapper[4871]: I0128 15:31:11.386525 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="9516ba5d-c370-480d-8ab7-5e90e188fc9b" containerName="extract" Jan 28 15:31:11 crc kubenswrapper[4871]: I0128 15:31:11.386966 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-wp8qk" Jan 28 15:31:11 crc kubenswrapper[4871]: I0128 15:31:11.389622 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 28 15:31:11 crc kubenswrapper[4871]: I0128 15:31:11.389645 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 28 15:31:11 crc kubenswrapper[4871]: I0128 15:31:11.389622 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-6w48n" Jan 28 15:31:11 crc kubenswrapper[4871]: I0128 15:31:11.399670 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-wp8qk"] Jan 28 15:31:11 crc kubenswrapper[4871]: I0128 15:31:11.501076 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwjdt\" (UniqueName: \"kubernetes.io/projected/5b5280aa-3b45-4da9-9d71-9c2448f5aa6a-kube-api-access-xwjdt\") pod \"nmstate-operator-646758c888-wp8qk\" (UID: \"5b5280aa-3b45-4da9-9d71-9c2448f5aa6a\") " pod="openshift-nmstate/nmstate-operator-646758c888-wp8qk" Jan 28 15:31:11 crc kubenswrapper[4871]: I0128 15:31:11.602095 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwjdt\" (UniqueName: \"kubernetes.io/projected/5b5280aa-3b45-4da9-9d71-9c2448f5aa6a-kube-api-access-xwjdt\") pod \"nmstate-operator-646758c888-wp8qk\" (UID: \"5b5280aa-3b45-4da9-9d71-9c2448f5aa6a\") " pod="openshift-nmstate/nmstate-operator-646758c888-wp8qk" Jan 28 15:31:11 crc kubenswrapper[4871]: I0128 15:31:11.618521 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwjdt\" (UniqueName: \"kubernetes.io/projected/5b5280aa-3b45-4da9-9d71-9c2448f5aa6a-kube-api-access-xwjdt\") pod \"nmstate-operator-646758c888-wp8qk\" (UID: 
\"5b5280aa-3b45-4da9-9d71-9c2448f5aa6a\") " pod="openshift-nmstate/nmstate-operator-646758c888-wp8qk" Jan 28 15:31:11 crc kubenswrapper[4871]: I0128 15:31:11.703954 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-wp8qk" Jan 28 15:31:12 crc kubenswrapper[4871]: I0128 15:31:12.097282 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-wp8qk"] Jan 28 15:31:12 crc kubenswrapper[4871]: I0128 15:31:12.933872 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-wp8qk" event={"ID":"5b5280aa-3b45-4da9-9d71-9c2448f5aa6a","Type":"ContainerStarted","Data":"45624a28edeb0f649fb143527439d19ce9cfb2e4e5597154cd89c5a8b34bbd81"} Jan 28 15:31:13 crc kubenswrapper[4871]: I0128 15:31:13.814194 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:31:13 crc kubenswrapper[4871]: I0128 15:31:13.814639 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:31:15 crc kubenswrapper[4871]: I0128 15:31:15.957095 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-wp8qk" event={"ID":"5b5280aa-3b45-4da9-9d71-9c2448f5aa6a","Type":"ContainerStarted","Data":"20a65a4c994d008ca20312f0f82e3fd50645c2b391bfde80e854ab3d0454c745"} Jan 28 15:31:15 crc kubenswrapper[4871]: I0128 15:31:15.977510 4871 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-wp8qk" podStartSLOduration=1.683366045 podStartE2EDuration="4.977377583s" podCreationTimestamp="2026-01-28 15:31:11 +0000 UTC" firstStartedPulling="2026-01-28 15:31:12.104024461 +0000 UTC m=+823.999862793" lastFinishedPulling="2026-01-28 15:31:15.398036009 +0000 UTC m=+827.293874331" observedRunningTime="2026-01-28 15:31:15.976773664 +0000 UTC m=+827.872612006" watchObservedRunningTime="2026-01-28 15:31:15.977377583 +0000 UTC m=+827.873215905" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.801700 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-k4ztv"] Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.802821 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-k4ztv" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.805766 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-mblwl" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.810993 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-k4ztv"] Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.821272 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-6lx5n"] Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.822016 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6lx5n" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.824120 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.868150 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-62t6z"] Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.869119 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-62t6z" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.869666 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-6lx5n"] Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.889388 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0d70e50a-fc4d-468c-a990-b84318b6db7d-nmstate-lock\") pod \"nmstate-handler-62t6z\" (UID: \"0d70e50a-fc4d-468c-a990-b84318b6db7d\") " pod="openshift-nmstate/nmstate-handler-62t6z" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.889429 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0d70e50a-fc4d-468c-a990-b84318b6db7d-ovs-socket\") pod \"nmstate-handler-62t6z\" (UID: \"0d70e50a-fc4d-468c-a990-b84318b6db7d\") " pod="openshift-nmstate/nmstate-handler-62t6z" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.889454 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb9zf\" (UniqueName: \"kubernetes.io/projected/3bd61983-6d96-440d-969c-6e70160a269e-kube-api-access-jb9zf\") pod \"nmstate-webhook-8474b5b9d8-6lx5n\" (UID: \"3bd61983-6d96-440d-969c-6e70160a269e\") " 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6lx5n" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.889471 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0d70e50a-fc4d-468c-a990-b84318b6db7d-dbus-socket\") pod \"nmstate-handler-62t6z\" (UID: \"0d70e50a-fc4d-468c-a990-b84318b6db7d\") " pod="openshift-nmstate/nmstate-handler-62t6z" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.889494 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3bd61983-6d96-440d-969c-6e70160a269e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6lx5n\" (UID: \"3bd61983-6d96-440d-969c-6e70160a269e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6lx5n" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.889511 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slz5j\" (UniqueName: \"kubernetes.io/projected/adecd4d5-4d6e-42e6-b6a1-25924a745bf4-kube-api-access-slz5j\") pod \"nmstate-metrics-54757c584b-k4ztv\" (UID: \"adecd4d5-4d6e-42e6-b6a1-25924a745bf4\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-k4ztv" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.889536 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5svx\" (UniqueName: \"kubernetes.io/projected/0d70e50a-fc4d-468c-a990-b84318b6db7d-kube-api-access-l5svx\") pod \"nmstate-handler-62t6z\" (UID: \"0d70e50a-fc4d-468c-a990-b84318b6db7d\") " pod="openshift-nmstate/nmstate-handler-62t6z" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.955269 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zwd77"] Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.955916 4871 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zwd77" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.958037 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.958192 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.958524 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hwscj" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.973644 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zwd77"] Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.990322 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3bd61983-6d96-440d-969c-6e70160a269e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6lx5n\" (UID: \"3bd61983-6d96-440d-969c-6e70160a269e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6lx5n" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.990366 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slz5j\" (UniqueName: \"kubernetes.io/projected/adecd4d5-4d6e-42e6-b6a1-25924a745bf4-kube-api-access-slz5j\") pod \"nmstate-metrics-54757c584b-k4ztv\" (UID: \"adecd4d5-4d6e-42e6-b6a1-25924a745bf4\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-k4ztv" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.990406 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5svx\" (UniqueName: \"kubernetes.io/projected/0d70e50a-fc4d-468c-a990-b84318b6db7d-kube-api-access-l5svx\") pod \"nmstate-handler-62t6z\" (UID: \"0d70e50a-fc4d-468c-a990-b84318b6db7d\") " 
pod="openshift-nmstate/nmstate-handler-62t6z" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.991204 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0d70e50a-fc4d-468c-a990-b84318b6db7d-nmstate-lock\") pod \"nmstate-handler-62t6z\" (UID: \"0d70e50a-fc4d-468c-a990-b84318b6db7d\") " pod="openshift-nmstate/nmstate-handler-62t6z" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.991250 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0d70e50a-fc4d-468c-a990-b84318b6db7d-ovs-socket\") pod \"nmstate-handler-62t6z\" (UID: \"0d70e50a-fc4d-468c-a990-b84318b6db7d\") " pod="openshift-nmstate/nmstate-handler-62t6z" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.991304 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb9zf\" (UniqueName: \"kubernetes.io/projected/3bd61983-6d96-440d-969c-6e70160a269e-kube-api-access-jb9zf\") pod \"nmstate-webhook-8474b5b9d8-6lx5n\" (UID: \"3bd61983-6d96-440d-969c-6e70160a269e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6lx5n" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.991327 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0d70e50a-fc4d-468c-a990-b84318b6db7d-dbus-socket\") pod \"nmstate-handler-62t6z\" (UID: \"0d70e50a-fc4d-468c-a990-b84318b6db7d\") " pod="openshift-nmstate/nmstate-handler-62t6z" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.992004 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0d70e50a-fc4d-468c-a990-b84318b6db7d-ovs-socket\") pod \"nmstate-handler-62t6z\" (UID: \"0d70e50a-fc4d-468c-a990-b84318b6db7d\") " pod="openshift-nmstate/nmstate-handler-62t6z" Jan 28 15:31:16 crc 
kubenswrapper[4871]: I0128 15:31:16.992053 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0d70e50a-fc4d-468c-a990-b84318b6db7d-nmstate-lock\") pod \"nmstate-handler-62t6z\" (UID: \"0d70e50a-fc4d-468c-a990-b84318b6db7d\") " pod="openshift-nmstate/nmstate-handler-62t6z" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.992492 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0d70e50a-fc4d-468c-a990-b84318b6db7d-dbus-socket\") pod \"nmstate-handler-62t6z\" (UID: \"0d70e50a-fc4d-468c-a990-b84318b6db7d\") " pod="openshift-nmstate/nmstate-handler-62t6z" Jan 28 15:31:16 crc kubenswrapper[4871]: I0128 15:31:16.996365 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3bd61983-6d96-440d-969c-6e70160a269e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6lx5n\" (UID: \"3bd61983-6d96-440d-969c-6e70160a269e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6lx5n" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.008403 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5svx\" (UniqueName: \"kubernetes.io/projected/0d70e50a-fc4d-468c-a990-b84318b6db7d-kube-api-access-l5svx\") pod \"nmstate-handler-62t6z\" (UID: \"0d70e50a-fc4d-468c-a990-b84318b6db7d\") " pod="openshift-nmstate/nmstate-handler-62t6z" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.018878 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb9zf\" (UniqueName: \"kubernetes.io/projected/3bd61983-6d96-440d-969c-6e70160a269e-kube-api-access-jb9zf\") pod \"nmstate-webhook-8474b5b9d8-6lx5n\" (UID: \"3bd61983-6d96-440d-969c-6e70160a269e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6lx5n" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.021100 4871 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slz5j\" (UniqueName: \"kubernetes.io/projected/adecd4d5-4d6e-42e6-b6a1-25924a745bf4-kube-api-access-slz5j\") pod \"nmstate-metrics-54757c584b-k4ztv\" (UID: \"adecd4d5-4d6e-42e6-b6a1-25924a745bf4\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-k4ztv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.092669 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ea02be43-3af2-4dbc-81f9-f456805b9b8d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zwd77\" (UID: \"ea02be43-3af2-4dbc-81f9-f456805b9b8d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zwd77" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.093942 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea02be43-3af2-4dbc-81f9-f456805b9b8d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zwd77\" (UID: \"ea02be43-3af2-4dbc-81f9-f456805b9b8d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zwd77" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.094060 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jrfw\" (UniqueName: \"kubernetes.io/projected/ea02be43-3af2-4dbc-81f9-f456805b9b8d-kube-api-access-6jrfw\") pod \"nmstate-console-plugin-7754f76f8b-zwd77\" (UID: \"ea02be43-3af2-4dbc-81f9-f456805b9b8d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zwd77" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.119886 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-k4ztv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.135484 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6588db564f-xzplv"] Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.136944 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.145894 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6lx5n" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.154223 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6588db564f-xzplv"] Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.185852 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-62t6z" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.195017 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w99h2\" (UniqueName: \"kubernetes.io/projected/cca9a166-2193-469f-a9f8-e9ca8fa36f00-kube-api-access-w99h2\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.195075 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jrfw\" (UniqueName: \"kubernetes.io/projected/ea02be43-3af2-4dbc-81f9-f456805b9b8d-kube-api-access-6jrfw\") pod \"nmstate-console-plugin-7754f76f8b-zwd77\" (UID: \"ea02be43-3af2-4dbc-81f9-f456805b9b8d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zwd77" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.195113 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cca9a166-2193-469f-a9f8-e9ca8fa36f00-console-config\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.195160 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cca9a166-2193-469f-a9f8-e9ca8fa36f00-service-ca\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.195183 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cca9a166-2193-469f-a9f8-e9ca8fa36f00-trusted-ca-bundle\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.195221 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cca9a166-2193-469f-a9f8-e9ca8fa36f00-console-oauth-config\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.195294 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ea02be43-3af2-4dbc-81f9-f456805b9b8d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zwd77\" (UID: \"ea02be43-3af2-4dbc-81f9-f456805b9b8d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zwd77" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.195323 4871 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cca9a166-2193-469f-a9f8-e9ca8fa36f00-oauth-serving-cert\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.195357 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cca9a166-2193-469f-a9f8-e9ca8fa36f00-console-serving-cert\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.195416 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea02be43-3af2-4dbc-81f9-f456805b9b8d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zwd77\" (UID: \"ea02be43-3af2-4dbc-81f9-f456805b9b8d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zwd77" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.196145 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ea02be43-3af2-4dbc-81f9-f456805b9b8d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zwd77\" (UID: \"ea02be43-3af2-4dbc-81f9-f456805b9b8d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zwd77" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.200044 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea02be43-3af2-4dbc-81f9-f456805b9b8d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zwd77\" (UID: \"ea02be43-3af2-4dbc-81f9-f456805b9b8d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zwd77" Jan 28 
15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.210657 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jrfw\" (UniqueName: \"kubernetes.io/projected/ea02be43-3af2-4dbc-81f9-f456805b9b8d-kube-api-access-6jrfw\") pod \"nmstate-console-plugin-7754f76f8b-zwd77\" (UID: \"ea02be43-3af2-4dbc-81f9-f456805b9b8d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zwd77" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.273229 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zwd77" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.296617 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cca9a166-2193-469f-a9f8-e9ca8fa36f00-console-serving-cert\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.296699 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w99h2\" (UniqueName: \"kubernetes.io/projected/cca9a166-2193-469f-a9f8-e9ca8fa36f00-kube-api-access-w99h2\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.296730 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cca9a166-2193-469f-a9f8-e9ca8fa36f00-console-config\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.296768 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/cca9a166-2193-469f-a9f8-e9ca8fa36f00-service-ca\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.296797 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cca9a166-2193-469f-a9f8-e9ca8fa36f00-trusted-ca-bundle\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.296833 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cca9a166-2193-469f-a9f8-e9ca8fa36f00-console-oauth-config\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.296871 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cca9a166-2193-469f-a9f8-e9ca8fa36f00-oauth-serving-cert\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.297827 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cca9a166-2193-469f-a9f8-e9ca8fa36f00-oauth-serving-cert\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.297854 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/cca9a166-2193-469f-a9f8-e9ca8fa36f00-service-ca\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.298883 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cca9a166-2193-469f-a9f8-e9ca8fa36f00-console-config\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.299134 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cca9a166-2193-469f-a9f8-e9ca8fa36f00-trusted-ca-bundle\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.304706 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cca9a166-2193-469f-a9f8-e9ca8fa36f00-console-serving-cert\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.306858 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cca9a166-2193-469f-a9f8-e9ca8fa36f00-console-oauth-config\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.317797 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w99h2\" (UniqueName: 
\"kubernetes.io/projected/cca9a166-2193-469f-a9f8-e9ca8fa36f00-kube-api-access-w99h2\") pod \"console-6588db564f-xzplv\" (UID: \"cca9a166-2193-469f-a9f8-e9ca8fa36f00\") " pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.374354 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-k4ztv"] Jan 28 15:31:17 crc kubenswrapper[4871]: W0128 15:31:17.386525 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadecd4d5_4d6e_42e6_b6a1_25924a745bf4.slice/crio-4a178e244cd0add795694588d7cfb09d54a6b6e80f8b434f5653a7e42e15ad05 WatchSource:0}: Error finding container 4a178e244cd0add795694588d7cfb09d54a6b6e80f8b434f5653a7e42e15ad05: Status 404 returned error can't find the container with id 4a178e244cd0add795694588d7cfb09d54a6b6e80f8b434f5653a7e42e15ad05 Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.414048 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-6lx5n"] Jan 28 15:31:17 crc kubenswrapper[4871]: W0128 15:31:17.417255 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bd61983_6d96_440d_969c_6e70160a269e.slice/crio-1f64b6bab63b11e877f7c472a153ff9cbd35059cc480f70ffce3758d885f4508 WatchSource:0}: Error finding container 1f64b6bab63b11e877f7c472a153ff9cbd35059cc480f70ffce3758d885f4508: Status 404 returned error can't find the container with id 1f64b6bab63b11e877f7c472a153ff9cbd35059cc480f70ffce3758d885f4508 Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.466428 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zwd77"] Jan 28 15:31:17 crc kubenswrapper[4871]: W0128 15:31:17.470213 4871 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea02be43_3af2_4dbc_81f9_f456805b9b8d.slice/crio-d8f1fe8a4df73717392ffe59dc52842a53ebc2d4a760cb91fa08e52ce2af7167 WatchSource:0}: Error finding container d8f1fe8a4df73717392ffe59dc52842a53ebc2d4a760cb91fa08e52ce2af7167: Status 404 returned error can't find the container with id d8f1fe8a4df73717392ffe59dc52842a53ebc2d4a760cb91fa08e52ce2af7167 Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.488512 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.712186 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6588db564f-xzplv"] Jan 28 15:31:17 crc kubenswrapper[4871]: W0128 15:31:17.719685 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcca9a166_2193_469f_a9f8_e9ca8fa36f00.slice/crio-c88f827f2926f2bb3a2af3ca27633d9780e550ae566019c10b47a5db86e1f98c WatchSource:0}: Error finding container c88f827f2926f2bb3a2af3ca27633d9780e550ae566019c10b47a5db86e1f98c: Status 404 returned error can't find the container with id c88f827f2926f2bb3a2af3ca27633d9780e550ae566019c10b47a5db86e1f98c Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.967114 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-k4ztv" event={"ID":"adecd4d5-4d6e-42e6-b6a1-25924a745bf4","Type":"ContainerStarted","Data":"4a178e244cd0add795694588d7cfb09d54a6b6e80f8b434f5653a7e42e15ad05"} Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.967859 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zwd77" event={"ID":"ea02be43-3af2-4dbc-81f9-f456805b9b8d","Type":"ContainerStarted","Data":"d8f1fe8a4df73717392ffe59dc52842a53ebc2d4a760cb91fa08e52ce2af7167"} Jan 28 15:31:17 crc 
kubenswrapper[4871]: I0128 15:31:17.968630 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-62t6z" event={"ID":"0d70e50a-fc4d-468c-a990-b84318b6db7d","Type":"ContainerStarted","Data":"f2a663e4c8ee8dee166c9311dbe625de814e171af3f234acaefbb47e1dcd2ba5"} Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.969399 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6lx5n" event={"ID":"3bd61983-6d96-440d-969c-6e70160a269e","Type":"ContainerStarted","Data":"1f64b6bab63b11e877f7c472a153ff9cbd35059cc480f70ffce3758d885f4508"} Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.970390 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6588db564f-xzplv" event={"ID":"cca9a166-2193-469f-a9f8-e9ca8fa36f00","Type":"ContainerStarted","Data":"2a250d49b3afe6d6dc5eee40470406994d996a4fcd824c75367eb76b5fa273ea"} Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.970410 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6588db564f-xzplv" event={"ID":"cca9a166-2193-469f-a9f8-e9ca8fa36f00","Type":"ContainerStarted","Data":"c88f827f2926f2bb3a2af3ca27633d9780e550ae566019c10b47a5db86e1f98c"} Jan 28 15:31:17 crc kubenswrapper[4871]: I0128 15:31:17.989523 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6588db564f-xzplv" podStartSLOduration=0.989503584 podStartE2EDuration="989.503584ms" podCreationTimestamp="2026-01-28 15:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:31:17.987040695 +0000 UTC m=+829.882879027" watchObservedRunningTime="2026-01-28 15:31:17.989503584 +0000 UTC m=+829.885341916" Jan 28 15:31:21 crc kubenswrapper[4871]: I0128 15:31:21.000302 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-62t6z" 
event={"ID":"0d70e50a-fc4d-468c-a990-b84318b6db7d","Type":"ContainerStarted","Data":"18d8ab2b41a85fabe7224f062cd099f299a16ff190f74cb74a5155b5076c77ce"} Jan 28 15:31:21 crc kubenswrapper[4871]: I0128 15:31:21.004405 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-62t6z" Jan 28 15:31:21 crc kubenswrapper[4871]: I0128 15:31:21.008311 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6lx5n" event={"ID":"3bd61983-6d96-440d-969c-6e70160a269e","Type":"ContainerStarted","Data":"2969b504915596168a8ec8c8e206d70bd95129a99cf7de0c9656f586a3fbbfdf"} Jan 28 15:31:21 crc kubenswrapper[4871]: I0128 15:31:21.008534 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6lx5n" Jan 28 15:31:21 crc kubenswrapper[4871]: I0128 15:31:21.011155 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-k4ztv" event={"ID":"adecd4d5-4d6e-42e6-b6a1-25924a745bf4","Type":"ContainerStarted","Data":"5d8a5ed9d0de9e5a1870d375adb49a17d3ae04c282b94339439854015a3beace"} Jan 28 15:31:21 crc kubenswrapper[4871]: I0128 15:31:21.013228 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zwd77" event={"ID":"ea02be43-3af2-4dbc-81f9-f456805b9b8d","Type":"ContainerStarted","Data":"a1cfed8df85192af06955280110c0094575075a34d7cf2b8450b65c9509348b7"} Jan 28 15:31:21 crc kubenswrapper[4871]: I0128 15:31:21.032943 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-62t6z" podStartSLOduration=1.7668728790000001 podStartE2EDuration="5.032917789s" podCreationTimestamp="2026-01-28 15:31:16 +0000 UTC" firstStartedPulling="2026-01-28 15:31:17.207686755 +0000 UTC m=+829.103525077" lastFinishedPulling="2026-01-28 15:31:20.473731665 +0000 UTC m=+832.369569987" 
observedRunningTime="2026-01-28 15:31:21.028981615 +0000 UTC m=+832.924819977" watchObservedRunningTime="2026-01-28 15:31:21.032917789 +0000 UTC m=+832.928756111" Jan 28 15:31:21 crc kubenswrapper[4871]: I0128 15:31:21.063800 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6lx5n" podStartSLOduration=2.038485247 podStartE2EDuration="5.063771269s" podCreationTimestamp="2026-01-28 15:31:16 +0000 UTC" firstStartedPulling="2026-01-28 15:31:17.418834914 +0000 UTC m=+829.314673236" lastFinishedPulling="2026-01-28 15:31:20.444120946 +0000 UTC m=+832.339959258" observedRunningTime="2026-01-28 15:31:21.048299988 +0000 UTC m=+832.944138320" watchObservedRunningTime="2026-01-28 15:31:21.063771269 +0000 UTC m=+832.959609621" Jan 28 15:31:21 crc kubenswrapper[4871]: I0128 15:31:21.082288 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zwd77" podStartSLOduration=2.111632019 podStartE2EDuration="5.082264856s" podCreationTimestamp="2026-01-28 15:31:16 +0000 UTC" firstStartedPulling="2026-01-28 15:31:17.473635364 +0000 UTC m=+829.369473686" lastFinishedPulling="2026-01-28 15:31:20.444268201 +0000 UTC m=+832.340106523" observedRunningTime="2026-01-28 15:31:21.074545731 +0000 UTC m=+832.970384073" watchObservedRunningTime="2026-01-28 15:31:21.082264856 +0000 UTC m=+832.978103208" Jan 28 15:31:23 crc kubenswrapper[4871]: I0128 15:31:23.028640 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-k4ztv" event={"ID":"adecd4d5-4d6e-42e6-b6a1-25924a745bf4","Type":"ContainerStarted","Data":"0ee9019c187ce5b632b861d31ff45491237610152a17d4f86203a1deb73b570b"} Jan 28 15:31:23 crc kubenswrapper[4871]: I0128 15:31:23.047902 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-k4ztv" podStartSLOduration=1.617543381 
podStartE2EDuration="7.047880921s" podCreationTimestamp="2026-01-28 15:31:16 +0000 UTC" firstStartedPulling="2026-01-28 15:31:17.388670277 +0000 UTC m=+829.284508599" lastFinishedPulling="2026-01-28 15:31:22.819007817 +0000 UTC m=+834.714846139" observedRunningTime="2026-01-28 15:31:23.042866961 +0000 UTC m=+834.938705303" watchObservedRunningTime="2026-01-28 15:31:23.047880921 +0000 UTC m=+834.943719243" Jan 28 15:31:27 crc kubenswrapper[4871]: I0128 15:31:27.227879 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-62t6z" Jan 28 15:31:27 crc kubenswrapper[4871]: I0128 15:31:27.489188 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:27 crc kubenswrapper[4871]: I0128 15:31:27.489255 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:27 crc kubenswrapper[4871]: I0128 15:31:27.496903 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:28 crc kubenswrapper[4871]: I0128 15:31:28.086266 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6588db564f-xzplv" Jan 28 15:31:28 crc kubenswrapper[4871]: I0128 15:31:28.138254 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-j85fr"] Jan 28 15:31:37 crc kubenswrapper[4871]: I0128 15:31:37.154372 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6lx5n" Jan 28 15:31:43 crc kubenswrapper[4871]: I0128 15:31:43.814328 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:31:43 crc kubenswrapper[4871]: I0128 15:31:43.815254 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:31:43 crc kubenswrapper[4871]: I0128 15:31:43.815339 4871 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:31:43 crc kubenswrapper[4871]: I0128 15:31:43.816438 4871 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99a5b6a3a56a0129d3e0910f8bee719a8f441dd30871a64b674173f9c9123f18"} pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:31:43 crc kubenswrapper[4871]: I0128 15:31:43.816699 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" containerID="cri-o://99a5b6a3a56a0129d3e0910f8bee719a8f441dd30871a64b674173f9c9123f18" gracePeriod=600 Jan 28 15:31:45 crc kubenswrapper[4871]: I0128 15:31:45.199786 4871 generic.go:334] "Generic (PLEG): container finished" podID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerID="99a5b6a3a56a0129d3e0910f8bee719a8f441dd30871a64b674173f9c9123f18" exitCode=0 Jan 28 15:31:45 crc kubenswrapper[4871]: I0128 15:31:45.199831 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" 
event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerDied","Data":"99a5b6a3a56a0129d3e0910f8bee719a8f441dd30871a64b674173f9c9123f18"} Jan 28 15:31:45 crc kubenswrapper[4871]: I0128 15:31:45.200398 4871 scope.go:117] "RemoveContainer" containerID="90d6822a584774ad2a67fa1ad8223c539293bc3a8bfaabea013dfe8e391f8ad2" Jan 28 15:31:46 crc kubenswrapper[4871]: I0128 15:31:46.210413 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerStarted","Data":"504ae14f7e055da72d55c5a96bfc70153a17e45dce0bb3e15ce3dccb6926e332"} Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.160956 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l"] Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.162497 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.165022 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.179309 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l"] Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.185749 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-j85fr" podUID="2571452b-5b45-43d1-bd39-35ef29c4fe80" containerName="console" containerID="cri-o://671c6dcd0413a2882adf49dddda82507d1829f3b1472b50446f3711bb49c15e6" gracePeriod=15 Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.358852 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-4sjjj\" (UniqueName: \"kubernetes.io/projected/a14e6292-a57f-4cda-98ca-3fa791f61964-kube-api-access-4sjjj\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l\" (UID: \"a14e6292-a57f-4cda-98ca-3fa791f61964\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.358951 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a14e6292-a57f-4cda-98ca-3fa791f61964-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l\" (UID: \"a14e6292-a57f-4cda-98ca-3fa791f61964\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.359029 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a14e6292-a57f-4cda-98ca-3fa791f61964-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l\" (UID: \"a14e6292-a57f-4cda-98ca-3fa791f61964\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.460194 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sjjj\" (UniqueName: \"kubernetes.io/projected/a14e6292-a57f-4cda-98ca-3fa791f61964-kube-api-access-4sjjj\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l\" (UID: \"a14e6292-a57f-4cda-98ca-3fa791f61964\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.460260 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a14e6292-a57f-4cda-98ca-3fa791f61964-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l\" (UID: \"a14e6292-a57f-4cda-98ca-3fa791f61964\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.460307 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a14e6292-a57f-4cda-98ca-3fa791f61964-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l\" (UID: \"a14e6292-a57f-4cda-98ca-3fa791f61964\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.460817 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a14e6292-a57f-4cda-98ca-3fa791f61964-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l\" (UID: \"a14e6292-a57f-4cda-98ca-3fa791f61964\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.460813 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a14e6292-a57f-4cda-98ca-3fa791f61964-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l\" (UID: \"a14e6292-a57f-4cda-98ca-3fa791f61964\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.484337 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sjjj\" (UniqueName: \"kubernetes.io/projected/a14e6292-a57f-4cda-98ca-3fa791f61964-kube-api-access-4sjjj\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l\" (UID: 
\"a14e6292-a57f-4cda-98ca-3fa791f61964\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.555098 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-j85fr_2571452b-5b45-43d1-bd39-35ef29c4fe80/console/0.log" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.555156 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.561005 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-oauth-serving-cert\") pod \"2571452b-5b45-43d1-bd39-35ef29c4fe80\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.561043 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2571452b-5b45-43d1-bd39-35ef29c4fe80-console-serving-cert\") pod \"2571452b-5b45-43d1-bd39-35ef29c4fe80\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.561099 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-trusted-ca-bundle\") pod \"2571452b-5b45-43d1-bd39-35ef29c4fe80\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.561117 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2571452b-5b45-43d1-bd39-35ef29c4fe80-console-oauth-config\") pod \"2571452b-5b45-43d1-bd39-35ef29c4fe80\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " 
Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.561142 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-console-config\") pod \"2571452b-5b45-43d1-bd39-35ef29c4fe80\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.561163 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gctxv\" (UniqueName: \"kubernetes.io/projected/2571452b-5b45-43d1-bd39-35ef29c4fe80-kube-api-access-gctxv\") pod \"2571452b-5b45-43d1-bd39-35ef29c4fe80\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.561180 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-service-ca\") pod \"2571452b-5b45-43d1-bd39-35ef29c4fe80\" (UID: \"2571452b-5b45-43d1-bd39-35ef29c4fe80\") " Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.562146 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-service-ca" (OuterVolumeSpecName: "service-ca") pod "2571452b-5b45-43d1-bd39-35ef29c4fe80" (UID: "2571452b-5b45-43d1-bd39-35ef29c4fe80"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.562197 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-console-config" (OuterVolumeSpecName: "console-config") pod "2571452b-5b45-43d1-bd39-35ef29c4fe80" (UID: "2571452b-5b45-43d1-bd39-35ef29c4fe80"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.562390 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2571452b-5b45-43d1-bd39-35ef29c4fe80" (UID: "2571452b-5b45-43d1-bd39-35ef29c4fe80"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.562695 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2571452b-5b45-43d1-bd39-35ef29c4fe80" (UID: "2571452b-5b45-43d1-bd39-35ef29c4fe80"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.565251 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2571452b-5b45-43d1-bd39-35ef29c4fe80-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2571452b-5b45-43d1-bd39-35ef29c4fe80" (UID: "2571452b-5b45-43d1-bd39-35ef29c4fe80"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.565434 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2571452b-5b45-43d1-bd39-35ef29c4fe80-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2571452b-5b45-43d1-bd39-35ef29c4fe80" (UID: "2571452b-5b45-43d1-bd39-35ef29c4fe80"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.568031 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2571452b-5b45-43d1-bd39-35ef29c4fe80-kube-api-access-gctxv" (OuterVolumeSpecName: "kube-api-access-gctxv") pod "2571452b-5b45-43d1-bd39-35ef29c4fe80" (UID: "2571452b-5b45-43d1-bd39-35ef29c4fe80"). InnerVolumeSpecName "kube-api-access-gctxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.662221 4871 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.662257 4871 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2571452b-5b45-43d1-bd39-35ef29c4fe80-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.662266 4871 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.662275 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gctxv\" (UniqueName: \"kubernetes.io/projected/2571452b-5b45-43d1-bd39-35ef29c4fe80-kube-api-access-gctxv\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.662285 4871 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.662292 4871 reconciler_common.go:293] "Volume detached for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2571452b-5b45-43d1-bd39-35ef29c4fe80-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.662300 4871 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2571452b-5b45-43d1-bd39-35ef29c4fe80-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:53 crc kubenswrapper[4871]: I0128 15:31:53.780690 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l" Jan 28 15:31:54 crc kubenswrapper[4871]: I0128 15:31:54.019268 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l"] Jan 28 15:31:54 crc kubenswrapper[4871]: I0128 15:31:54.269924 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-j85fr_2571452b-5b45-43d1-bd39-35ef29c4fe80/console/0.log" Jan 28 15:31:54 crc kubenswrapper[4871]: I0128 15:31:54.270332 4871 generic.go:334] "Generic (PLEG): container finished" podID="2571452b-5b45-43d1-bd39-35ef29c4fe80" containerID="671c6dcd0413a2882adf49dddda82507d1829f3b1472b50446f3711bb49c15e6" exitCode=2 Jan 28 15:31:54 crc kubenswrapper[4871]: I0128 15:31:54.270411 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-j85fr" Jan 28 15:31:54 crc kubenswrapper[4871]: I0128 15:31:54.270467 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j85fr" event={"ID":"2571452b-5b45-43d1-bd39-35ef29c4fe80","Type":"ContainerDied","Data":"671c6dcd0413a2882adf49dddda82507d1829f3b1472b50446f3711bb49c15e6"} Jan 28 15:31:54 crc kubenswrapper[4871]: I0128 15:31:54.270513 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j85fr" event={"ID":"2571452b-5b45-43d1-bd39-35ef29c4fe80","Type":"ContainerDied","Data":"9b744af10254a55a98bb0ffab18f7efb716e2df3f880ee5219d058d12b952e7b"} Jan 28 15:31:54 crc kubenswrapper[4871]: I0128 15:31:54.270535 4871 scope.go:117] "RemoveContainer" containerID="671c6dcd0413a2882adf49dddda82507d1829f3b1472b50446f3711bb49c15e6" Jan 28 15:31:54 crc kubenswrapper[4871]: I0128 15:31:54.274375 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l" event={"ID":"a14e6292-a57f-4cda-98ca-3fa791f61964","Type":"ContainerStarted","Data":"2f690251531b2e4eb03310846114f99a0553682b04dc66a12625001e6d20990b"} Jan 28 15:31:54 crc kubenswrapper[4871]: I0128 15:31:54.316666 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-j85fr"] Jan 28 15:31:54 crc kubenswrapper[4871]: I0128 15:31:54.325763 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-j85fr"] Jan 28 15:31:54 crc kubenswrapper[4871]: I0128 15:31:54.424053 4871 scope.go:117] "RemoveContainer" containerID="671c6dcd0413a2882adf49dddda82507d1829f3b1472b50446f3711bb49c15e6" Jan 28 15:31:54 crc kubenswrapper[4871]: E0128 15:31:54.424507 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"671c6dcd0413a2882adf49dddda82507d1829f3b1472b50446f3711bb49c15e6\": container with ID starting with 671c6dcd0413a2882adf49dddda82507d1829f3b1472b50446f3711bb49c15e6 not found: ID does not exist" containerID="671c6dcd0413a2882adf49dddda82507d1829f3b1472b50446f3711bb49c15e6" Jan 28 15:31:54 crc kubenswrapper[4871]: I0128 15:31:54.424548 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"671c6dcd0413a2882adf49dddda82507d1829f3b1472b50446f3711bb49c15e6"} err="failed to get container status \"671c6dcd0413a2882adf49dddda82507d1829f3b1472b50446f3711bb49c15e6\": rpc error: code = NotFound desc = could not find container \"671c6dcd0413a2882adf49dddda82507d1829f3b1472b50446f3711bb49c15e6\": container with ID starting with 671c6dcd0413a2882adf49dddda82507d1829f3b1472b50446f3711bb49c15e6 not found: ID does not exist" Jan 28 15:31:54 crc kubenswrapper[4871]: I0128 15:31:54.913953 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2571452b-5b45-43d1-bd39-35ef29c4fe80" path="/var/lib/kubelet/pods/2571452b-5b45-43d1-bd39-35ef29c4fe80/volumes" Jan 28 15:31:55 crc kubenswrapper[4871]: I0128 15:31:55.285789 4871 generic.go:334] "Generic (PLEG): container finished" podID="a14e6292-a57f-4cda-98ca-3fa791f61964" containerID="6e3832fa44e40561cce09f88cff903f8761210fd59d329feb93f44f7229a14f7" exitCode=0 Jan 28 15:31:55 crc kubenswrapper[4871]: I0128 15:31:55.285862 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l" event={"ID":"a14e6292-a57f-4cda-98ca-3fa791f61964","Type":"ContainerDied","Data":"6e3832fa44e40561cce09f88cff903f8761210fd59d329feb93f44f7229a14f7"} Jan 28 15:31:59 crc kubenswrapper[4871]: I0128 15:31:59.306814 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qdjpm"] Jan 28 15:31:59 crc kubenswrapper[4871]: E0128 15:31:59.307498 4871 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2571452b-5b45-43d1-bd39-35ef29c4fe80" containerName="console" Jan 28 15:31:59 crc kubenswrapper[4871]: I0128 15:31:59.307510 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="2571452b-5b45-43d1-bd39-35ef29c4fe80" containerName="console" Jan 28 15:31:59 crc kubenswrapper[4871]: I0128 15:31:59.307629 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="2571452b-5b45-43d1-bd39-35ef29c4fe80" containerName="console" Jan 28 15:31:59 crc kubenswrapper[4871]: I0128 15:31:59.308312 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdjpm" Jan 28 15:31:59 crc kubenswrapper[4871]: I0128 15:31:59.342111 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdjpm"] Jan 28 15:31:59 crc kubenswrapper[4871]: I0128 15:31:59.453259 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/842d67d4-d5ba-4753-8ec8-dbc7cdb6c647-catalog-content\") pod \"redhat-marketplace-qdjpm\" (UID: \"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647\") " pod="openshift-marketplace/redhat-marketplace-qdjpm" Jan 28 15:31:59 crc kubenswrapper[4871]: I0128 15:31:59.453317 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/842d67d4-d5ba-4753-8ec8-dbc7cdb6c647-utilities\") pod \"redhat-marketplace-qdjpm\" (UID: \"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647\") " pod="openshift-marketplace/redhat-marketplace-qdjpm" Jan 28 15:31:59 crc kubenswrapper[4871]: I0128 15:31:59.453358 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfh6s\" (UniqueName: \"kubernetes.io/projected/842d67d4-d5ba-4753-8ec8-dbc7cdb6c647-kube-api-access-bfh6s\") pod \"redhat-marketplace-qdjpm\" (UID: 
\"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647\") " pod="openshift-marketplace/redhat-marketplace-qdjpm" Jan 28 15:31:59 crc kubenswrapper[4871]: I0128 15:31:59.554158 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfh6s\" (UniqueName: \"kubernetes.io/projected/842d67d4-d5ba-4753-8ec8-dbc7cdb6c647-kube-api-access-bfh6s\") pod \"redhat-marketplace-qdjpm\" (UID: \"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647\") " pod="openshift-marketplace/redhat-marketplace-qdjpm" Jan 28 15:31:59 crc kubenswrapper[4871]: I0128 15:31:59.554236 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/842d67d4-d5ba-4753-8ec8-dbc7cdb6c647-catalog-content\") pod \"redhat-marketplace-qdjpm\" (UID: \"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647\") " pod="openshift-marketplace/redhat-marketplace-qdjpm" Jan 28 15:31:59 crc kubenswrapper[4871]: I0128 15:31:59.554257 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/842d67d4-d5ba-4753-8ec8-dbc7cdb6c647-utilities\") pod \"redhat-marketplace-qdjpm\" (UID: \"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647\") " pod="openshift-marketplace/redhat-marketplace-qdjpm" Jan 28 15:31:59 crc kubenswrapper[4871]: I0128 15:31:59.554738 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/842d67d4-d5ba-4753-8ec8-dbc7cdb6c647-utilities\") pod \"redhat-marketplace-qdjpm\" (UID: \"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647\") " pod="openshift-marketplace/redhat-marketplace-qdjpm" Jan 28 15:31:59 crc kubenswrapper[4871]: I0128 15:31:59.554829 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/842d67d4-d5ba-4753-8ec8-dbc7cdb6c647-catalog-content\") pod \"redhat-marketplace-qdjpm\" (UID: \"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647\") " 
pod="openshift-marketplace/redhat-marketplace-qdjpm" Jan 28 15:31:59 crc kubenswrapper[4871]: I0128 15:31:59.575169 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfh6s\" (UniqueName: \"kubernetes.io/projected/842d67d4-d5ba-4753-8ec8-dbc7cdb6c647-kube-api-access-bfh6s\") pod \"redhat-marketplace-qdjpm\" (UID: \"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647\") " pod="openshift-marketplace/redhat-marketplace-qdjpm" Jan 28 15:31:59 crc kubenswrapper[4871]: I0128 15:31:59.627292 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdjpm" Jan 28 15:31:59 crc kubenswrapper[4871]: I0128 15:31:59.834078 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdjpm"] Jan 28 15:32:00 crc kubenswrapper[4871]: I0128 15:32:00.320709 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdjpm" event={"ID":"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647","Type":"ContainerStarted","Data":"932cf82fa3520910ec8660dd51cdf0b78152c70ca9164bf490245a317e3de7eb"} Jan 28 15:32:00 crc kubenswrapper[4871]: I0128 15:32:00.322733 4871 generic.go:334] "Generic (PLEG): container finished" podID="a14e6292-a57f-4cda-98ca-3fa791f61964" containerID="dffb4e7a7d1de0f976188222ed4f23e7ff4af3a4c556856e727f0ac89cfb4be8" exitCode=0 Jan 28 15:32:00 crc kubenswrapper[4871]: I0128 15:32:00.322776 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l" event={"ID":"a14e6292-a57f-4cda-98ca-3fa791f61964","Type":"ContainerDied","Data":"dffb4e7a7d1de0f976188222ed4f23e7ff4af3a4c556856e727f0ac89cfb4be8"} Jan 28 15:32:01 crc kubenswrapper[4871]: I0128 15:32:01.333925 4871 generic.go:334] "Generic (PLEG): container finished" podID="842d67d4-d5ba-4753-8ec8-dbc7cdb6c647" containerID="13cf1f7bd04410372c5243d93f4dd62868256916c3164ec10e8ef6bc5fde4b17" 
exitCode=0 Jan 28 15:32:01 crc kubenswrapper[4871]: I0128 15:32:01.333976 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdjpm" event={"ID":"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647","Type":"ContainerDied","Data":"13cf1f7bd04410372c5243d93f4dd62868256916c3164ec10e8ef6bc5fde4b17"} Jan 28 15:32:02 crc kubenswrapper[4871]: I0128 15:32:02.341216 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdjpm" event={"ID":"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647","Type":"ContainerStarted","Data":"2de1df16ae166d158487e00c68770b20620b68b4b51327f70d19c3c7452dc864"} Jan 28 15:32:02 crc kubenswrapper[4871]: I0128 15:32:02.345274 4871 generic.go:334] "Generic (PLEG): container finished" podID="a14e6292-a57f-4cda-98ca-3fa791f61964" containerID="7570f78a6b15bb6151a5fd40805e59aa3c5c79d972df583b0c1442bede41629b" exitCode=0 Jan 28 15:32:02 crc kubenswrapper[4871]: I0128 15:32:02.345325 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l" event={"ID":"a14e6292-a57f-4cda-98ca-3fa791f61964","Type":"ContainerDied","Data":"7570f78a6b15bb6151a5fd40805e59aa3c5c79d972df583b0c1442bede41629b"} Jan 28 15:32:03 crc kubenswrapper[4871]: I0128 15:32:03.369925 4871 generic.go:334] "Generic (PLEG): container finished" podID="842d67d4-d5ba-4753-8ec8-dbc7cdb6c647" containerID="2de1df16ae166d158487e00c68770b20620b68b4b51327f70d19c3c7452dc864" exitCode=0 Jan 28 15:32:03 crc kubenswrapper[4871]: I0128 15:32:03.370067 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdjpm" event={"ID":"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647","Type":"ContainerDied","Data":"2de1df16ae166d158487e00c68770b20620b68b4b51327f70d19c3c7452dc864"} Jan 28 15:32:03 crc kubenswrapper[4871]: I0128 15:32:03.696229 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l" Jan 28 15:32:03 crc kubenswrapper[4871]: I0128 15:32:03.720338 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a14e6292-a57f-4cda-98ca-3fa791f61964-bundle\") pod \"a14e6292-a57f-4cda-98ca-3fa791f61964\" (UID: \"a14e6292-a57f-4cda-98ca-3fa791f61964\") " Jan 28 15:32:03 crc kubenswrapper[4871]: I0128 15:32:03.720449 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sjjj\" (UniqueName: \"kubernetes.io/projected/a14e6292-a57f-4cda-98ca-3fa791f61964-kube-api-access-4sjjj\") pod \"a14e6292-a57f-4cda-98ca-3fa791f61964\" (UID: \"a14e6292-a57f-4cda-98ca-3fa791f61964\") " Jan 28 15:32:03 crc kubenswrapper[4871]: I0128 15:32:03.720515 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a14e6292-a57f-4cda-98ca-3fa791f61964-util\") pod \"a14e6292-a57f-4cda-98ca-3fa791f61964\" (UID: \"a14e6292-a57f-4cda-98ca-3fa791f61964\") " Jan 28 15:32:03 crc kubenswrapper[4871]: I0128 15:32:03.721519 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a14e6292-a57f-4cda-98ca-3fa791f61964-bundle" (OuterVolumeSpecName: "bundle") pod "a14e6292-a57f-4cda-98ca-3fa791f61964" (UID: "a14e6292-a57f-4cda-98ca-3fa791f61964"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:32:03 crc kubenswrapper[4871]: I0128 15:32:03.726800 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a14e6292-a57f-4cda-98ca-3fa791f61964-kube-api-access-4sjjj" (OuterVolumeSpecName: "kube-api-access-4sjjj") pod "a14e6292-a57f-4cda-98ca-3fa791f61964" (UID: "a14e6292-a57f-4cda-98ca-3fa791f61964"). InnerVolumeSpecName "kube-api-access-4sjjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:32:03 crc kubenswrapper[4871]: I0128 15:32:03.731615 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a14e6292-a57f-4cda-98ca-3fa791f61964-util" (OuterVolumeSpecName: "util") pod "a14e6292-a57f-4cda-98ca-3fa791f61964" (UID: "a14e6292-a57f-4cda-98ca-3fa791f61964"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:32:03 crc kubenswrapper[4871]: I0128 15:32:03.824243 4871 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a14e6292-a57f-4cda-98ca-3fa791f61964-util\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:03 crc kubenswrapper[4871]: I0128 15:32:03.824305 4871 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a14e6292-a57f-4cda-98ca-3fa791f61964-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:03 crc kubenswrapper[4871]: I0128 15:32:03.824318 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sjjj\" (UniqueName: \"kubernetes.io/projected/a14e6292-a57f-4cda-98ca-3fa791f61964-kube-api-access-4sjjj\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:04 crc kubenswrapper[4871]: I0128 15:32:04.377964 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdjpm" event={"ID":"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647","Type":"ContainerStarted","Data":"3f7625c7985800727926bb1ecc1db660eeed9d62fd90f03df88bdd6931f521cd"} Jan 28 15:32:04 crc kubenswrapper[4871]: I0128 15:32:04.380242 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l" event={"ID":"a14e6292-a57f-4cda-98ca-3fa791f61964","Type":"ContainerDied","Data":"2f690251531b2e4eb03310846114f99a0553682b04dc66a12625001e6d20990b"} Jan 28 15:32:04 crc kubenswrapper[4871]: I0128 15:32:04.380286 4871 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f690251531b2e4eb03310846114f99a0553682b04dc66a12625001e6d20990b" Jan 28 15:32:04 crc kubenswrapper[4871]: I0128 15:32:04.380311 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l" Jan 28 15:32:04 crc kubenswrapper[4871]: I0128 15:32:04.398687 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qdjpm" podStartSLOduration=2.825108486 podStartE2EDuration="5.39866562s" podCreationTimestamp="2026-01-28 15:31:59 +0000 UTC" firstStartedPulling="2026-01-28 15:32:01.338877337 +0000 UTC m=+873.234715679" lastFinishedPulling="2026-01-28 15:32:03.912434441 +0000 UTC m=+875.808272813" observedRunningTime="2026-01-28 15:32:04.394373925 +0000 UTC m=+876.290212247" watchObservedRunningTime="2026-01-28 15:32:04.39866562 +0000 UTC m=+876.294503942" Jan 28 15:32:09 crc kubenswrapper[4871]: I0128 15:32:09.628569 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qdjpm" Jan 28 15:32:09 crc kubenswrapper[4871]: I0128 15:32:09.628910 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qdjpm" Jan 28 15:32:09 crc kubenswrapper[4871]: I0128 15:32:09.673570 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qdjpm" Jan 28 15:32:10 crc kubenswrapper[4871]: I0128 15:32:10.477181 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qdjpm" Jan 28 15:32:12 crc kubenswrapper[4871]: I0128 15:32:12.706456 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdjpm"] Jan 28 15:32:12 crc kubenswrapper[4871]: I0128 
15:32:12.707819 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qdjpm" podUID="842d67d4-d5ba-4753-8ec8-dbc7cdb6c647" containerName="registry-server" containerID="cri-o://3f7625c7985800727926bb1ecc1db660eeed9d62fd90f03df88bdd6931f521cd" gracePeriod=2 Jan 28 15:32:13 crc kubenswrapper[4871]: I0128 15:32:13.436356 4871 generic.go:334] "Generic (PLEG): container finished" podID="842d67d4-d5ba-4753-8ec8-dbc7cdb6c647" containerID="3f7625c7985800727926bb1ecc1db660eeed9d62fd90f03df88bdd6931f521cd" exitCode=0 Jan 28 15:32:13 crc kubenswrapper[4871]: I0128 15:32:13.436445 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdjpm" event={"ID":"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647","Type":"ContainerDied","Data":"3f7625c7985800727926bb1ecc1db660eeed9d62fd90f03df88bdd6931f521cd"} Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.004393 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6fbbb975c4-sgkdk"] Jan 28 15:32:14 crc kubenswrapper[4871]: E0128 15:32:14.004605 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a14e6292-a57f-4cda-98ca-3fa791f61964" containerName="util" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.004618 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="a14e6292-a57f-4cda-98ca-3fa791f61964" containerName="util" Jan 28 15:32:14 crc kubenswrapper[4871]: E0128 15:32:14.004633 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a14e6292-a57f-4cda-98ca-3fa791f61964" containerName="extract" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.004641 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="a14e6292-a57f-4cda-98ca-3fa791f61964" containerName="extract" Jan 28 15:32:14 crc kubenswrapper[4871]: E0128 15:32:14.004652 4871 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a14e6292-a57f-4cda-98ca-3fa791f61964" containerName="pull" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.004658 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="a14e6292-a57f-4cda-98ca-3fa791f61964" containerName="pull" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.004753 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="a14e6292-a57f-4cda-98ca-3fa791f61964" containerName="extract" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.005096 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6fbbb975c4-sgkdk" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.008161 4871 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.008749 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.009848 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.010015 4871 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-rxqsm" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.025519 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6fbbb975c4-sgkdk"] Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.027674 4871 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.062986 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kjt9\" (UniqueName: 
\"kubernetes.io/projected/58432f54-c624-4b9e-a13d-f63b16f88543-kube-api-access-2kjt9\") pod \"metallb-operator-controller-manager-6fbbb975c4-sgkdk\" (UID: \"58432f54-c624-4b9e-a13d-f63b16f88543\") " pod="metallb-system/metallb-operator-controller-manager-6fbbb975c4-sgkdk" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.063302 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/58432f54-c624-4b9e-a13d-f63b16f88543-webhook-cert\") pod \"metallb-operator-controller-manager-6fbbb975c4-sgkdk\" (UID: \"58432f54-c624-4b9e-a13d-f63b16f88543\") " pod="metallb-system/metallb-operator-controller-manager-6fbbb975c4-sgkdk" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.063363 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/58432f54-c624-4b9e-a13d-f63b16f88543-apiservice-cert\") pod \"metallb-operator-controller-manager-6fbbb975c4-sgkdk\" (UID: \"58432f54-c624-4b9e-a13d-f63b16f88543\") " pod="metallb-system/metallb-operator-controller-manager-6fbbb975c4-sgkdk" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.137183 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdjpm" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.163805 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/842d67d4-d5ba-4753-8ec8-dbc7cdb6c647-catalog-content\") pod \"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647\" (UID: \"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647\") " Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.163875 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfh6s\" (UniqueName: \"kubernetes.io/projected/842d67d4-d5ba-4753-8ec8-dbc7cdb6c647-kube-api-access-bfh6s\") pod \"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647\" (UID: \"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647\") " Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.163913 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/842d67d4-d5ba-4753-8ec8-dbc7cdb6c647-utilities\") pod \"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647\" (UID: \"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647\") " Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.164106 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/58432f54-c624-4b9e-a13d-f63b16f88543-apiservice-cert\") pod \"metallb-operator-controller-manager-6fbbb975c4-sgkdk\" (UID: \"58432f54-c624-4b9e-a13d-f63b16f88543\") " pod="metallb-system/metallb-operator-controller-manager-6fbbb975c4-sgkdk" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.164160 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kjt9\" (UniqueName: \"kubernetes.io/projected/58432f54-c624-4b9e-a13d-f63b16f88543-kube-api-access-2kjt9\") pod \"metallb-operator-controller-manager-6fbbb975c4-sgkdk\" (UID: \"58432f54-c624-4b9e-a13d-f63b16f88543\") " 
pod="metallb-system/metallb-operator-controller-manager-6fbbb975c4-sgkdk" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.164182 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/58432f54-c624-4b9e-a13d-f63b16f88543-webhook-cert\") pod \"metallb-operator-controller-manager-6fbbb975c4-sgkdk\" (UID: \"58432f54-c624-4b9e-a13d-f63b16f88543\") " pod="metallb-system/metallb-operator-controller-manager-6fbbb975c4-sgkdk" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.165561 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/842d67d4-d5ba-4753-8ec8-dbc7cdb6c647-utilities" (OuterVolumeSpecName: "utilities") pod "842d67d4-d5ba-4753-8ec8-dbc7cdb6c647" (UID: "842d67d4-d5ba-4753-8ec8-dbc7cdb6c647"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.187504 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/842d67d4-d5ba-4753-8ec8-dbc7cdb6c647-kube-api-access-bfh6s" (OuterVolumeSpecName: "kube-api-access-bfh6s") pod "842d67d4-d5ba-4753-8ec8-dbc7cdb6c647" (UID: "842d67d4-d5ba-4753-8ec8-dbc7cdb6c647"). InnerVolumeSpecName "kube-api-access-bfh6s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.193301 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/58432f54-c624-4b9e-a13d-f63b16f88543-apiservice-cert\") pod \"metallb-operator-controller-manager-6fbbb975c4-sgkdk\" (UID: \"58432f54-c624-4b9e-a13d-f63b16f88543\") " pod="metallb-system/metallb-operator-controller-manager-6fbbb975c4-sgkdk" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.198998 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/58432f54-c624-4b9e-a13d-f63b16f88543-webhook-cert\") pod \"metallb-operator-controller-manager-6fbbb975c4-sgkdk\" (UID: \"58432f54-c624-4b9e-a13d-f63b16f88543\") " pod="metallb-system/metallb-operator-controller-manager-6fbbb975c4-sgkdk" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.209345 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kjt9\" (UniqueName: \"kubernetes.io/projected/58432f54-c624-4b9e-a13d-f63b16f88543-kube-api-access-2kjt9\") pod \"metallb-operator-controller-manager-6fbbb975c4-sgkdk\" (UID: \"58432f54-c624-4b9e-a13d-f63b16f88543\") " pod="metallb-system/metallb-operator-controller-manager-6fbbb975c4-sgkdk" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.265208 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfh6s\" (UniqueName: \"kubernetes.io/projected/842d67d4-d5ba-4753-8ec8-dbc7cdb6c647-kube-api-access-bfh6s\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.265431 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/842d67d4-d5ba-4753-8ec8-dbc7cdb6c647-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.269115 4871 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/842d67d4-d5ba-4753-8ec8-dbc7cdb6c647-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "842d67d4-d5ba-4753-8ec8-dbc7cdb6c647" (UID: "842d67d4-d5ba-4753-8ec8-dbc7cdb6c647"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.349481 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6fbbb975c4-sgkdk" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.366724 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/842d67d4-d5ba-4753-8ec8-dbc7cdb6c647-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.386435 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-bc9b59d67-5ggd4"] Jan 28 15:32:14 crc kubenswrapper[4871]: E0128 15:32:14.386885 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="842d67d4-d5ba-4753-8ec8-dbc7cdb6c647" containerName="extract-content" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.386997 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="842d67d4-d5ba-4753-8ec8-dbc7cdb6c647" containerName="extract-content" Jan 28 15:32:14 crc kubenswrapper[4871]: E0128 15:32:14.387089 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="842d67d4-d5ba-4753-8ec8-dbc7cdb6c647" containerName="extract-utilities" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.387165 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="842d67d4-d5ba-4753-8ec8-dbc7cdb6c647" containerName="extract-utilities" Jan 28 15:32:14 crc kubenswrapper[4871]: E0128 15:32:14.387241 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="842d67d4-d5ba-4753-8ec8-dbc7cdb6c647" 
containerName="registry-server" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.387308 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="842d67d4-d5ba-4753-8ec8-dbc7cdb6c647" containerName="registry-server" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.387492 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="842d67d4-d5ba-4753-8ec8-dbc7cdb6c647" containerName="registry-server" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.388028 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-bc9b59d67-5ggd4" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.392095 4871 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.392319 4871 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.392474 4871 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dllfv" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.402223 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-bc9b59d67-5ggd4"] Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.443207 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdjpm" event={"ID":"842d67d4-d5ba-4753-8ec8-dbc7cdb6c647","Type":"ContainerDied","Data":"932cf82fa3520910ec8660dd51cdf0b78152c70ca9164bf490245a317e3de7eb"} Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.443260 4871 scope.go:117] "RemoveContainer" containerID="3f7625c7985800727926bb1ecc1db660eeed9d62fd90f03df88bdd6931f521cd" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.443379 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdjpm" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.466581 4871 scope.go:117] "RemoveContainer" containerID="2de1df16ae166d158487e00c68770b20620b68b4b51327f70d19c3c7452dc864" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.467615 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/77871b10-57a8-4121-b3db-3389e942bc8b-apiservice-cert\") pod \"metallb-operator-webhook-server-bc9b59d67-5ggd4\" (UID: \"77871b10-57a8-4121-b3db-3389e942bc8b\") " pod="metallb-system/metallb-operator-webhook-server-bc9b59d67-5ggd4" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.467764 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/77871b10-57a8-4121-b3db-3389e942bc8b-webhook-cert\") pod \"metallb-operator-webhook-server-bc9b59d67-5ggd4\" (UID: \"77871b10-57a8-4121-b3db-3389e942bc8b\") " pod="metallb-system/metallb-operator-webhook-server-bc9b59d67-5ggd4" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.467908 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dpqv\" (UniqueName: \"kubernetes.io/projected/77871b10-57a8-4121-b3db-3389e942bc8b-kube-api-access-7dpqv\") pod \"metallb-operator-webhook-server-bc9b59d67-5ggd4\" (UID: \"77871b10-57a8-4121-b3db-3389e942bc8b\") " pod="metallb-system/metallb-operator-webhook-server-bc9b59d67-5ggd4" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.515770 4871 scope.go:117] "RemoveContainer" containerID="13cf1f7bd04410372c5243d93f4dd62868256916c3164ec10e8ef6bc5fde4b17" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.519222 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdjpm"] Jan 28 15:32:14 crc 
kubenswrapper[4871]: I0128 15:32:14.522300 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdjpm"] Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.573446 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dpqv\" (UniqueName: \"kubernetes.io/projected/77871b10-57a8-4121-b3db-3389e942bc8b-kube-api-access-7dpqv\") pod \"metallb-operator-webhook-server-bc9b59d67-5ggd4\" (UID: \"77871b10-57a8-4121-b3db-3389e942bc8b\") " pod="metallb-system/metallb-operator-webhook-server-bc9b59d67-5ggd4" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.573521 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/77871b10-57a8-4121-b3db-3389e942bc8b-apiservice-cert\") pod \"metallb-operator-webhook-server-bc9b59d67-5ggd4\" (UID: \"77871b10-57a8-4121-b3db-3389e942bc8b\") " pod="metallb-system/metallb-operator-webhook-server-bc9b59d67-5ggd4" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.573579 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/77871b10-57a8-4121-b3db-3389e942bc8b-webhook-cert\") pod \"metallb-operator-webhook-server-bc9b59d67-5ggd4\" (UID: \"77871b10-57a8-4121-b3db-3389e942bc8b\") " pod="metallb-system/metallb-operator-webhook-server-bc9b59d67-5ggd4" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.580031 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/77871b10-57a8-4121-b3db-3389e942bc8b-webhook-cert\") pod \"metallb-operator-webhook-server-bc9b59d67-5ggd4\" (UID: \"77871b10-57a8-4121-b3db-3389e942bc8b\") " pod="metallb-system/metallb-operator-webhook-server-bc9b59d67-5ggd4" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.581185 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/77871b10-57a8-4121-b3db-3389e942bc8b-apiservice-cert\") pod \"metallb-operator-webhook-server-bc9b59d67-5ggd4\" (UID: \"77871b10-57a8-4121-b3db-3389e942bc8b\") " pod="metallb-system/metallb-operator-webhook-server-bc9b59d67-5ggd4" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.605193 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dpqv\" (UniqueName: \"kubernetes.io/projected/77871b10-57a8-4121-b3db-3389e942bc8b-kube-api-access-7dpqv\") pod \"metallb-operator-webhook-server-bc9b59d67-5ggd4\" (UID: \"77871b10-57a8-4121-b3db-3389e942bc8b\") " pod="metallb-system/metallb-operator-webhook-server-bc9b59d67-5ggd4" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.621119 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6fbbb975c4-sgkdk"] Jan 28 15:32:14 crc kubenswrapper[4871]: W0128 15:32:14.632389 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58432f54_c624_4b9e_a13d_f63b16f88543.slice/crio-ed3705796c79c3d1f40a4b3649a4c4093d911e880ccecbeed21700997002732f WatchSource:0}: Error finding container ed3705796c79c3d1f40a4b3649a4c4093d911e880ccecbeed21700997002732f: Status 404 returned error can't find the container with id ed3705796c79c3d1f40a4b3649a4c4093d911e880ccecbeed21700997002732f Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.709481 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-bc9b59d67-5ggd4" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.970997 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="842d67d4-d5ba-4753-8ec8-dbc7cdb6c647" path="/var/lib/kubelet/pods/842d67d4-d5ba-4753-8ec8-dbc7cdb6c647/volumes" Jan 28 15:32:14 crc kubenswrapper[4871]: I0128 15:32:14.988942 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-bc9b59d67-5ggd4"] Jan 28 15:32:14 crc kubenswrapper[4871]: W0128 15:32:14.996797 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77871b10_57a8_4121_b3db_3389e942bc8b.slice/crio-5a1ed0e490436d4e081bebce33949316049bb5a18e416d86a0ce41520347d2e2 WatchSource:0}: Error finding container 5a1ed0e490436d4e081bebce33949316049bb5a18e416d86a0ce41520347d2e2: Status 404 returned error can't find the container with id 5a1ed0e490436d4e081bebce33949316049bb5a18e416d86a0ce41520347d2e2 Jan 28 15:32:15 crc kubenswrapper[4871]: I0128 15:32:15.451070 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-bc9b59d67-5ggd4" event={"ID":"77871b10-57a8-4121-b3db-3389e942bc8b","Type":"ContainerStarted","Data":"5a1ed0e490436d4e081bebce33949316049bb5a18e416d86a0ce41520347d2e2"} Jan 28 15:32:15 crc kubenswrapper[4871]: I0128 15:32:15.452693 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6fbbb975c4-sgkdk" event={"ID":"58432f54-c624-4b9e-a13d-f63b16f88543","Type":"ContainerStarted","Data":"ed3705796c79c3d1f40a4b3649a4c4093d911e880ccecbeed21700997002732f"} Jan 28 15:32:21 crc kubenswrapper[4871]: I0128 15:32:21.488553 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-bc9b59d67-5ggd4" 
event={"ID":"77871b10-57a8-4121-b3db-3389e942bc8b","Type":"ContainerStarted","Data":"0669a11d8084140d01883c6656d01032021cafc58d1d64901a1db741d1f7992e"} Jan 28 15:32:21 crc kubenswrapper[4871]: I0128 15:32:21.489514 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-bc9b59d67-5ggd4" Jan 28 15:32:21 crc kubenswrapper[4871]: I0128 15:32:21.490404 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6fbbb975c4-sgkdk" event={"ID":"58432f54-c624-4b9e-a13d-f63b16f88543","Type":"ContainerStarted","Data":"4b9b97176a5aa97527b318b63572dc0e4e88981f05d3a79c79eb75308a4c792f"} Jan 28 15:32:21 crc kubenswrapper[4871]: I0128 15:32:21.490538 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6fbbb975c4-sgkdk" Jan 28 15:32:21 crc kubenswrapper[4871]: I0128 15:32:21.508877 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-bc9b59d67-5ggd4" podStartSLOduration=2.017779571 podStartE2EDuration="7.508861646s" podCreationTimestamp="2026-01-28 15:32:14 +0000 UTC" firstStartedPulling="2026-01-28 15:32:15.000978348 +0000 UTC m=+886.896816670" lastFinishedPulling="2026-01-28 15:32:20.492060423 +0000 UTC m=+892.387898745" observedRunningTime="2026-01-28 15:32:21.507709119 +0000 UTC m=+893.403547461" watchObservedRunningTime="2026-01-28 15:32:21.508861646 +0000 UTC m=+893.404699968" Jan 28 15:32:21 crc kubenswrapper[4871]: I0128 15:32:21.540763 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6fbbb975c4-sgkdk" podStartSLOduration=2.718195378 podStartE2EDuration="8.540740884s" podCreationTimestamp="2026-01-28 15:32:13 +0000 UTC" firstStartedPulling="2026-01-28 15:32:14.650823075 +0000 UTC m=+886.546661397" lastFinishedPulling="2026-01-28 
15:32:20.473368581 +0000 UTC m=+892.369206903" observedRunningTime="2026-01-28 15:32:21.540047202 +0000 UTC m=+893.435885524" watchObservedRunningTime="2026-01-28 15:32:21.540740884 +0000 UTC m=+893.436579206" Jan 28 15:32:24 crc kubenswrapper[4871]: I0128 15:32:24.713159 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qhtc4"] Jan 28 15:32:24 crc kubenswrapper[4871]: I0128 15:32:24.714818 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhtc4" Jan 28 15:32:24 crc kubenswrapper[4871]: I0128 15:32:24.729026 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qhtc4"] Jan 28 15:32:24 crc kubenswrapper[4871]: I0128 15:32:24.830721 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9mf7\" (UniqueName: \"kubernetes.io/projected/f12e388e-950a-4903-aaad-d3707a99b798-kube-api-access-q9mf7\") pod \"certified-operators-qhtc4\" (UID: \"f12e388e-950a-4903-aaad-d3707a99b798\") " pod="openshift-marketplace/certified-operators-qhtc4" Jan 28 15:32:24 crc kubenswrapper[4871]: I0128 15:32:24.830789 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f12e388e-950a-4903-aaad-d3707a99b798-catalog-content\") pod \"certified-operators-qhtc4\" (UID: \"f12e388e-950a-4903-aaad-d3707a99b798\") " pod="openshift-marketplace/certified-operators-qhtc4" Jan 28 15:32:24 crc kubenswrapper[4871]: I0128 15:32:24.830838 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f12e388e-950a-4903-aaad-d3707a99b798-utilities\") pod \"certified-operators-qhtc4\" (UID: \"f12e388e-950a-4903-aaad-d3707a99b798\") " pod="openshift-marketplace/certified-operators-qhtc4" Jan 
28 15:32:24 crc kubenswrapper[4871]: I0128 15:32:24.932389 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9mf7\" (UniqueName: \"kubernetes.io/projected/f12e388e-950a-4903-aaad-d3707a99b798-kube-api-access-q9mf7\") pod \"certified-operators-qhtc4\" (UID: \"f12e388e-950a-4903-aaad-d3707a99b798\") " pod="openshift-marketplace/certified-operators-qhtc4"
Jan 28 15:32:24 crc kubenswrapper[4871]: I0128 15:32:24.932445 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f12e388e-950a-4903-aaad-d3707a99b798-catalog-content\") pod \"certified-operators-qhtc4\" (UID: \"f12e388e-950a-4903-aaad-d3707a99b798\") " pod="openshift-marketplace/certified-operators-qhtc4"
Jan 28 15:32:24 crc kubenswrapper[4871]: I0128 15:32:24.932481 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f12e388e-950a-4903-aaad-d3707a99b798-utilities\") pod \"certified-operators-qhtc4\" (UID: \"f12e388e-950a-4903-aaad-d3707a99b798\") " pod="openshift-marketplace/certified-operators-qhtc4"
Jan 28 15:32:24 crc kubenswrapper[4871]: I0128 15:32:24.933026 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f12e388e-950a-4903-aaad-d3707a99b798-utilities\") pod \"certified-operators-qhtc4\" (UID: \"f12e388e-950a-4903-aaad-d3707a99b798\") " pod="openshift-marketplace/certified-operators-qhtc4"
Jan 28 15:32:24 crc kubenswrapper[4871]: I0128 15:32:24.933119 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f12e388e-950a-4903-aaad-d3707a99b798-catalog-content\") pod \"certified-operators-qhtc4\" (UID: \"f12e388e-950a-4903-aaad-d3707a99b798\") " pod="openshift-marketplace/certified-operators-qhtc4"
Jan 28 15:32:24 crc kubenswrapper[4871]: I0128 15:32:24.955042 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9mf7\" (UniqueName: \"kubernetes.io/projected/f12e388e-950a-4903-aaad-d3707a99b798-kube-api-access-q9mf7\") pod \"certified-operators-qhtc4\" (UID: \"f12e388e-950a-4903-aaad-d3707a99b798\") " pod="openshift-marketplace/certified-operators-qhtc4"
Jan 28 15:32:25 crc kubenswrapper[4871]: I0128 15:32:25.036988 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhtc4"
Jan 28 15:32:25 crc kubenswrapper[4871]: I0128 15:32:25.315217 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qhtc4"]
Jan 28 15:32:25 crc kubenswrapper[4871]: I0128 15:32:25.515607 4871 generic.go:334] "Generic (PLEG): container finished" podID="f12e388e-950a-4903-aaad-d3707a99b798" containerID="1f6ed394b50d2f2e386a244e4b54825fc8d05a2e36db473c1f81070da243bad6" exitCode=0
Jan 28 15:32:25 crc kubenswrapper[4871]: I0128 15:32:25.515646 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhtc4" event={"ID":"f12e388e-950a-4903-aaad-d3707a99b798","Type":"ContainerDied","Data":"1f6ed394b50d2f2e386a244e4b54825fc8d05a2e36db473c1f81070da243bad6"}
Jan 28 15:32:25 crc kubenswrapper[4871]: I0128 15:32:25.515676 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhtc4" event={"ID":"f12e388e-950a-4903-aaad-d3707a99b798","Type":"ContainerStarted","Data":"cfa4d5dd56a0dae88f49c4d51ae5b6468c68a0585a0dadff51b7fbcf2b48affe"}
Jan 28 15:32:28 crc kubenswrapper[4871]: I0128 15:32:28.534976 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhtc4" event={"ID":"f12e388e-950a-4903-aaad-d3707a99b798","Type":"ContainerStarted","Data":"105b6705e18bb68792d3aceffffede1088dfce7b38df75403a0082fee1b91b63"}
Jan 28 15:32:29 crc kubenswrapper[4871]: I0128 15:32:29.542937 4871 generic.go:334] "Generic (PLEG): container finished" podID="f12e388e-950a-4903-aaad-d3707a99b798" containerID="105b6705e18bb68792d3aceffffede1088dfce7b38df75403a0082fee1b91b63" exitCode=0
Jan 28 15:32:29 crc kubenswrapper[4871]: I0128 15:32:29.542983 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhtc4" event={"ID":"f12e388e-950a-4903-aaad-d3707a99b798","Type":"ContainerDied","Data":"105b6705e18bb68792d3aceffffede1088dfce7b38df75403a0082fee1b91b63"}
Jan 28 15:32:30 crc kubenswrapper[4871]: I0128 15:32:30.509059 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4w7mf"]
Jan 28 15:32:30 crc kubenswrapper[4871]: I0128 15:32:30.510560 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4w7mf"
Jan 28 15:32:30 crc kubenswrapper[4871]: I0128 15:32:30.523918 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4w7mf"]
Jan 28 15:32:30 crc kubenswrapper[4871]: I0128 15:32:30.554724 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhtc4" event={"ID":"f12e388e-950a-4903-aaad-d3707a99b798","Type":"ContainerStarted","Data":"47949614656c7f5056c1478b41a46adcbc6f84c89b2c5e2dbae857f41d3b45a7"}
Jan 28 15:32:30 crc kubenswrapper[4871]: I0128 15:32:30.589655 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qhtc4" podStartSLOduration=1.99423028 podStartE2EDuration="6.589637156s" podCreationTimestamp="2026-01-28 15:32:24 +0000 UTC" firstStartedPulling="2026-01-28 15:32:25.517092648 +0000 UTC m=+897.412930970" lastFinishedPulling="2026-01-28 15:32:30.112499514 +0000 UTC m=+902.008337846" observedRunningTime="2026-01-28 15:32:30.573419323 +0000 UTC m=+902.469257645" watchObservedRunningTime="2026-01-28 15:32:30.589637156 +0000 UTC m=+902.485475478"
Jan 28 15:32:30 crc kubenswrapper[4871]: I0128 15:32:30.603598 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bb8844f-1f3a-4cca-b905-42bc857c31aa-utilities\") pod \"community-operators-4w7mf\" (UID: \"6bb8844f-1f3a-4cca-b905-42bc857c31aa\") " pod="openshift-marketplace/community-operators-4w7mf"
Jan 28 15:32:30 crc kubenswrapper[4871]: I0128 15:32:30.603643 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bb8844f-1f3a-4cca-b905-42bc857c31aa-catalog-content\") pod \"community-operators-4w7mf\" (UID: \"6bb8844f-1f3a-4cca-b905-42bc857c31aa\") " pod="openshift-marketplace/community-operators-4w7mf"
Jan 28 15:32:30 crc kubenswrapper[4871]: I0128 15:32:30.603691 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6bkv\" (UniqueName: \"kubernetes.io/projected/6bb8844f-1f3a-4cca-b905-42bc857c31aa-kube-api-access-x6bkv\") pod \"community-operators-4w7mf\" (UID: \"6bb8844f-1f3a-4cca-b905-42bc857c31aa\") " pod="openshift-marketplace/community-operators-4w7mf"
Jan 28 15:32:30 crc kubenswrapper[4871]: I0128 15:32:30.704641 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bb8844f-1f3a-4cca-b905-42bc857c31aa-utilities\") pod \"community-operators-4w7mf\" (UID: \"6bb8844f-1f3a-4cca-b905-42bc857c31aa\") " pod="openshift-marketplace/community-operators-4w7mf"
Jan 28 15:32:30 crc kubenswrapper[4871]: I0128 15:32:30.704689 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bb8844f-1f3a-4cca-b905-42bc857c31aa-catalog-content\") pod \"community-operators-4w7mf\" (UID: \"6bb8844f-1f3a-4cca-b905-42bc857c31aa\") " pod="openshift-marketplace/community-operators-4w7mf"
Jan 28 15:32:30 crc kubenswrapper[4871]: I0128 15:32:30.704742 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6bkv\" (UniqueName: \"kubernetes.io/projected/6bb8844f-1f3a-4cca-b905-42bc857c31aa-kube-api-access-x6bkv\") pod \"community-operators-4w7mf\" (UID: \"6bb8844f-1f3a-4cca-b905-42bc857c31aa\") " pod="openshift-marketplace/community-operators-4w7mf"
Jan 28 15:32:30 crc kubenswrapper[4871]: I0128 15:32:30.705134 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bb8844f-1f3a-4cca-b905-42bc857c31aa-utilities\") pod \"community-operators-4w7mf\" (UID: \"6bb8844f-1f3a-4cca-b905-42bc857c31aa\") " pod="openshift-marketplace/community-operators-4w7mf"
Jan 28 15:32:30 crc kubenswrapper[4871]: I0128 15:32:30.705254 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bb8844f-1f3a-4cca-b905-42bc857c31aa-catalog-content\") pod \"community-operators-4w7mf\" (UID: \"6bb8844f-1f3a-4cca-b905-42bc857c31aa\") " pod="openshift-marketplace/community-operators-4w7mf"
Jan 28 15:32:30 crc kubenswrapper[4871]: I0128 15:32:30.733663 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6bkv\" (UniqueName: \"kubernetes.io/projected/6bb8844f-1f3a-4cca-b905-42bc857c31aa-kube-api-access-x6bkv\") pod \"community-operators-4w7mf\" (UID: \"6bb8844f-1f3a-4cca-b905-42bc857c31aa\") " pod="openshift-marketplace/community-operators-4w7mf"
Jan 28 15:32:30 crc kubenswrapper[4871]: I0128 15:32:30.823965 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4w7mf"
Jan 28 15:32:31 crc kubenswrapper[4871]: I0128 15:32:31.062346 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4w7mf"]
Jan 28 15:32:31 crc kubenswrapper[4871]: I0128 15:32:31.562040 4871 generic.go:334] "Generic (PLEG): container finished" podID="6bb8844f-1f3a-4cca-b905-42bc857c31aa" containerID="a2376b2cc5413ac27b0572a31e7e2ab8b35074ee4eb649ca92e98ad89c470430" exitCode=0
Jan 28 15:32:31 crc kubenswrapper[4871]: I0128 15:32:31.562131 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4w7mf" event={"ID":"6bb8844f-1f3a-4cca-b905-42bc857c31aa","Type":"ContainerDied","Data":"a2376b2cc5413ac27b0572a31e7e2ab8b35074ee4eb649ca92e98ad89c470430"}
Jan 28 15:32:31 crc kubenswrapper[4871]: I0128 15:32:31.562552 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4w7mf" event={"ID":"6bb8844f-1f3a-4cca-b905-42bc857c31aa","Type":"ContainerStarted","Data":"7ba9912ddb84c8005a9e324489442a8607c8ec8336051c4a2f3b6feb9fd855d6"}
Jan 28 15:32:33 crc kubenswrapper[4871]: I0128 15:32:33.576782 4871 generic.go:334] "Generic (PLEG): container finished" podID="6bb8844f-1f3a-4cca-b905-42bc857c31aa" containerID="470a2ebfaca1fe8ce2de6399d906aa8e0b3e9e05430340d3dbd6236efa6883a4" exitCode=0
Jan 28 15:32:33 crc kubenswrapper[4871]: I0128 15:32:33.576875 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4w7mf" event={"ID":"6bb8844f-1f3a-4cca-b905-42bc857c31aa","Type":"ContainerDied","Data":"470a2ebfaca1fe8ce2de6399d906aa8e0b3e9e05430340d3dbd6236efa6883a4"}
Jan 28 15:32:34 crc kubenswrapper[4871]: I0128 15:32:34.713825 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-bc9b59d67-5ggd4"
Jan 28 15:32:35 crc kubenswrapper[4871]: I0128 15:32:35.037662 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qhtc4"
Jan 28 15:32:35 crc kubenswrapper[4871]: I0128 15:32:35.037710 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qhtc4"
Jan 28 15:32:35 crc kubenswrapper[4871]: I0128 15:32:35.082260 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qhtc4"
Jan 28 15:32:35 crc kubenswrapper[4871]: I0128 15:32:35.595641 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4w7mf" event={"ID":"6bb8844f-1f3a-4cca-b905-42bc857c31aa","Type":"ContainerStarted","Data":"65b8dcb2660726130f2cc2ec683c6976a85f18399fdaefaabc15037fe6be3fbd"}
Jan 28 15:32:35 crc kubenswrapper[4871]: I0128 15:32:35.622934 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4w7mf" podStartSLOduration=2.655562016 podStartE2EDuration="5.622909743s" podCreationTimestamp="2026-01-28 15:32:30 +0000 UTC" firstStartedPulling="2026-01-28 15:32:31.563664335 +0000 UTC m=+903.459502657" lastFinishedPulling="2026-01-28 15:32:34.531012052 +0000 UTC m=+906.426850384" observedRunningTime="2026-01-28 15:32:35.617516103 +0000 UTC m=+907.513354445" watchObservedRunningTime="2026-01-28 15:32:35.622909743 +0000 UTC m=+907.518748105"
Jan 28 15:32:35 crc kubenswrapper[4871]: I0128 15:32:35.668251 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qhtc4"
Jan 28 15:32:38 crc kubenswrapper[4871]: I0128 15:32:38.305728 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qhtc4"]
Jan 28 15:32:38 crc kubenswrapper[4871]: I0128 15:32:38.306411 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qhtc4" podUID="f12e388e-950a-4903-aaad-d3707a99b798" containerName="registry-server" containerID="cri-o://47949614656c7f5056c1478b41a46adcbc6f84c89b2c5e2dbae857f41d3b45a7" gracePeriod=2
Jan 28 15:32:38 crc kubenswrapper[4871]: I0128 15:32:38.629689 4871 generic.go:334] "Generic (PLEG): container finished" podID="f12e388e-950a-4903-aaad-d3707a99b798" containerID="47949614656c7f5056c1478b41a46adcbc6f84c89b2c5e2dbae857f41d3b45a7" exitCode=0
Jan 28 15:32:38 crc kubenswrapper[4871]: I0128 15:32:38.629726 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhtc4" event={"ID":"f12e388e-950a-4903-aaad-d3707a99b798","Type":"ContainerDied","Data":"47949614656c7f5056c1478b41a46adcbc6f84c89b2c5e2dbae857f41d3b45a7"}
Jan 28 15:32:38 crc kubenswrapper[4871]: I0128 15:32:38.665846 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhtc4"
Jan 28 15:32:38 crc kubenswrapper[4871]: I0128 15:32:38.744058 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9mf7\" (UniqueName: \"kubernetes.io/projected/f12e388e-950a-4903-aaad-d3707a99b798-kube-api-access-q9mf7\") pod \"f12e388e-950a-4903-aaad-d3707a99b798\" (UID: \"f12e388e-950a-4903-aaad-d3707a99b798\") "
Jan 28 15:32:38 crc kubenswrapper[4871]: I0128 15:32:38.744290 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f12e388e-950a-4903-aaad-d3707a99b798-catalog-content\") pod \"f12e388e-950a-4903-aaad-d3707a99b798\" (UID: \"f12e388e-950a-4903-aaad-d3707a99b798\") "
Jan 28 15:32:38 crc kubenswrapper[4871]: I0128 15:32:38.744349 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f12e388e-950a-4903-aaad-d3707a99b798-utilities\") pod \"f12e388e-950a-4903-aaad-d3707a99b798\" (UID: \"f12e388e-950a-4903-aaad-d3707a99b798\") "
Jan 28 15:32:38 crc kubenswrapper[4871]: I0128 15:32:38.745110 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f12e388e-950a-4903-aaad-d3707a99b798-utilities" (OuterVolumeSpecName: "utilities") pod "f12e388e-950a-4903-aaad-d3707a99b798" (UID: "f12e388e-950a-4903-aaad-d3707a99b798"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:32:38 crc kubenswrapper[4871]: I0128 15:32:38.749012 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f12e388e-950a-4903-aaad-d3707a99b798-kube-api-access-q9mf7" (OuterVolumeSpecName: "kube-api-access-q9mf7") pod "f12e388e-950a-4903-aaad-d3707a99b798" (UID: "f12e388e-950a-4903-aaad-d3707a99b798"). InnerVolumeSpecName "kube-api-access-q9mf7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:32:38 crc kubenswrapper[4871]: I0128 15:32:38.794408 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f12e388e-950a-4903-aaad-d3707a99b798-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f12e388e-950a-4903-aaad-d3707a99b798" (UID: "f12e388e-950a-4903-aaad-d3707a99b798"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:32:38 crc kubenswrapper[4871]: I0128 15:32:38.846254 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f12e388e-950a-4903-aaad-d3707a99b798-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:32:38 crc kubenswrapper[4871]: I0128 15:32:38.846288 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f12e388e-950a-4903-aaad-d3707a99b798-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 15:32:38 crc kubenswrapper[4871]: I0128 15:32:38.846298 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9mf7\" (UniqueName: \"kubernetes.io/projected/f12e388e-950a-4903-aaad-d3707a99b798-kube-api-access-q9mf7\") on node \"crc\" DevicePath \"\""
Jan 28 15:32:39 crc kubenswrapper[4871]: I0128 15:32:39.637993 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhtc4" event={"ID":"f12e388e-950a-4903-aaad-d3707a99b798","Type":"ContainerDied","Data":"cfa4d5dd56a0dae88f49c4d51ae5b6468c68a0585a0dadff51b7fbcf2b48affe"}
Jan 28 15:32:39 crc kubenswrapper[4871]: I0128 15:32:39.638048 4871 scope.go:117] "RemoveContainer" containerID="47949614656c7f5056c1478b41a46adcbc6f84c89b2c5e2dbae857f41d3b45a7"
Jan 28 15:32:39 crc kubenswrapper[4871]: I0128 15:32:39.638096 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhtc4"
Jan 28 15:32:39 crc kubenswrapper[4871]: I0128 15:32:39.664968 4871 scope.go:117] "RemoveContainer" containerID="105b6705e18bb68792d3aceffffede1088dfce7b38df75403a0082fee1b91b63"
Jan 28 15:32:39 crc kubenswrapper[4871]: I0128 15:32:39.665960 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qhtc4"]
Jan 28 15:32:39 crc kubenswrapper[4871]: I0128 15:32:39.673119 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qhtc4"]
Jan 28 15:32:39 crc kubenswrapper[4871]: I0128 15:32:39.684196 4871 scope.go:117] "RemoveContainer" containerID="1f6ed394b50d2f2e386a244e4b54825fc8d05a2e36db473c1f81070da243bad6"
Jan 28 15:32:40 crc kubenswrapper[4871]: I0128 15:32:40.824427 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4w7mf"
Jan 28 15:32:40 crc kubenswrapper[4871]: I0128 15:32:40.824810 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4w7mf"
Jan 28 15:32:40 crc kubenswrapper[4871]: I0128 15:32:40.868242 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4w7mf"
Jan 28 15:32:40 crc kubenswrapper[4871]: I0128 15:32:40.910755 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f12e388e-950a-4903-aaad-d3707a99b798" path="/var/lib/kubelet/pods/f12e388e-950a-4903-aaad-d3707a99b798/volumes"
Jan 28 15:32:41 crc kubenswrapper[4871]: I0128 15:32:41.682481 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4w7mf"
Jan 28 15:32:44 crc kubenswrapper[4871]: I0128 15:32:44.107448 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4w7mf"]
Jan 28 15:32:44 crc kubenswrapper[4871]: I0128 15:32:44.665997 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4w7mf" podUID="6bb8844f-1f3a-4cca-b905-42bc857c31aa" containerName="registry-server" containerID="cri-o://65b8dcb2660726130f2cc2ec683c6976a85f18399fdaefaabc15037fe6be3fbd" gracePeriod=2
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.016925 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4w7mf"
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.134005 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bb8844f-1f3a-4cca-b905-42bc857c31aa-utilities\") pod \"6bb8844f-1f3a-4cca-b905-42bc857c31aa\" (UID: \"6bb8844f-1f3a-4cca-b905-42bc857c31aa\") "
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.134131 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bb8844f-1f3a-4cca-b905-42bc857c31aa-catalog-content\") pod \"6bb8844f-1f3a-4cca-b905-42bc857c31aa\" (UID: \"6bb8844f-1f3a-4cca-b905-42bc857c31aa\") "
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.134193 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6bkv\" (UniqueName: \"kubernetes.io/projected/6bb8844f-1f3a-4cca-b905-42bc857c31aa-kube-api-access-x6bkv\") pod \"6bb8844f-1f3a-4cca-b905-42bc857c31aa\" (UID: \"6bb8844f-1f3a-4cca-b905-42bc857c31aa\") "
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.138080 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bb8844f-1f3a-4cca-b905-42bc857c31aa-utilities" (OuterVolumeSpecName: "utilities") pod "6bb8844f-1f3a-4cca-b905-42bc857c31aa" (UID: "6bb8844f-1f3a-4cca-b905-42bc857c31aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.145835 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bb8844f-1f3a-4cca-b905-42bc857c31aa-kube-api-access-x6bkv" (OuterVolumeSpecName: "kube-api-access-x6bkv") pod "6bb8844f-1f3a-4cca-b905-42bc857c31aa" (UID: "6bb8844f-1f3a-4cca-b905-42bc857c31aa"). InnerVolumeSpecName "kube-api-access-x6bkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.188713 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bb8844f-1f3a-4cca-b905-42bc857c31aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6bb8844f-1f3a-4cca-b905-42bc857c31aa" (UID: "6bb8844f-1f3a-4cca-b905-42bc857c31aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.236209 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bb8844f-1f3a-4cca-b905-42bc857c31aa-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.236462 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bb8844f-1f3a-4cca-b905-42bc857c31aa-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.236530 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6bkv\" (UniqueName: \"kubernetes.io/projected/6bb8844f-1f3a-4cca-b905-42bc857c31aa-kube-api-access-x6bkv\") on node \"crc\" DevicePath \"\""
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.673861 4871 generic.go:334] "Generic (PLEG): container finished" podID="6bb8844f-1f3a-4cca-b905-42bc857c31aa" containerID="65b8dcb2660726130f2cc2ec683c6976a85f18399fdaefaabc15037fe6be3fbd" exitCode=0
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.673919 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4w7mf"
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.673929 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4w7mf" event={"ID":"6bb8844f-1f3a-4cca-b905-42bc857c31aa","Type":"ContainerDied","Data":"65b8dcb2660726130f2cc2ec683c6976a85f18399fdaefaabc15037fe6be3fbd"}
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.673979 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4w7mf" event={"ID":"6bb8844f-1f3a-4cca-b905-42bc857c31aa","Type":"ContainerDied","Data":"7ba9912ddb84c8005a9e324489442a8607c8ec8336051c4a2f3b6feb9fd855d6"}
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.674006 4871 scope.go:117] "RemoveContainer" containerID="65b8dcb2660726130f2cc2ec683c6976a85f18399fdaefaabc15037fe6be3fbd"
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.693948 4871 scope.go:117] "RemoveContainer" containerID="470a2ebfaca1fe8ce2de6399d906aa8e0b3e9e05430340d3dbd6236efa6883a4"
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.704157 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4w7mf"]
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.708328 4871 scope.go:117] "RemoveContainer" containerID="a2376b2cc5413ac27b0572a31e7e2ab8b35074ee4eb649ca92e98ad89c470430"
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.712806 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4w7mf"]
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.729286 4871 scope.go:117] "RemoveContainer" containerID="65b8dcb2660726130f2cc2ec683c6976a85f18399fdaefaabc15037fe6be3fbd"
Jan 28 15:32:45 crc kubenswrapper[4871]: E0128 15:32:45.729713 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65b8dcb2660726130f2cc2ec683c6976a85f18399fdaefaabc15037fe6be3fbd\": container with ID starting with 65b8dcb2660726130f2cc2ec683c6976a85f18399fdaefaabc15037fe6be3fbd not found: ID does not exist" containerID="65b8dcb2660726130f2cc2ec683c6976a85f18399fdaefaabc15037fe6be3fbd"
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.729750 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65b8dcb2660726130f2cc2ec683c6976a85f18399fdaefaabc15037fe6be3fbd"} err="failed to get container status \"65b8dcb2660726130f2cc2ec683c6976a85f18399fdaefaabc15037fe6be3fbd\": rpc error: code = NotFound desc = could not find container \"65b8dcb2660726130f2cc2ec683c6976a85f18399fdaefaabc15037fe6be3fbd\": container with ID starting with 65b8dcb2660726130f2cc2ec683c6976a85f18399fdaefaabc15037fe6be3fbd not found: ID does not exist"
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.729781 4871 scope.go:117] "RemoveContainer" containerID="470a2ebfaca1fe8ce2de6399d906aa8e0b3e9e05430340d3dbd6236efa6883a4"
Jan 28 15:32:45 crc kubenswrapper[4871]: E0128 15:32:45.730413 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"470a2ebfaca1fe8ce2de6399d906aa8e0b3e9e05430340d3dbd6236efa6883a4\": container with ID starting with 470a2ebfaca1fe8ce2de6399d906aa8e0b3e9e05430340d3dbd6236efa6883a4 not found: ID does not exist" containerID="470a2ebfaca1fe8ce2de6399d906aa8e0b3e9e05430340d3dbd6236efa6883a4"
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.730433 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"470a2ebfaca1fe8ce2de6399d906aa8e0b3e9e05430340d3dbd6236efa6883a4"} err="failed to get container status \"470a2ebfaca1fe8ce2de6399d906aa8e0b3e9e05430340d3dbd6236efa6883a4\": rpc error: code = NotFound desc = could not find container \"470a2ebfaca1fe8ce2de6399d906aa8e0b3e9e05430340d3dbd6236efa6883a4\": container with ID starting with 470a2ebfaca1fe8ce2de6399d906aa8e0b3e9e05430340d3dbd6236efa6883a4 not found: ID does not exist"
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.730448 4871 scope.go:117] "RemoveContainer" containerID="a2376b2cc5413ac27b0572a31e7e2ab8b35074ee4eb649ca92e98ad89c470430"
Jan 28 15:32:45 crc kubenswrapper[4871]: E0128 15:32:45.730848 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2376b2cc5413ac27b0572a31e7e2ab8b35074ee4eb649ca92e98ad89c470430\": container with ID starting with a2376b2cc5413ac27b0572a31e7e2ab8b35074ee4eb649ca92e98ad89c470430 not found: ID does not exist" containerID="a2376b2cc5413ac27b0572a31e7e2ab8b35074ee4eb649ca92e98ad89c470430"
Jan 28 15:32:45 crc kubenswrapper[4871]: I0128 15:32:45.730911 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2376b2cc5413ac27b0572a31e7e2ab8b35074ee4eb649ca92e98ad89c470430"} err="failed to get container status \"a2376b2cc5413ac27b0572a31e7e2ab8b35074ee4eb649ca92e98ad89c470430\": rpc error: code = NotFound desc = could not find container \"a2376b2cc5413ac27b0572a31e7e2ab8b35074ee4eb649ca92e98ad89c470430\": container with ID starting with a2376b2cc5413ac27b0572a31e7e2ab8b35074ee4eb649ca92e98ad89c470430 not found: ID does not exist"
Jan 28 15:32:46 crc kubenswrapper[4871]: I0128 15:32:46.910007 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bb8844f-1f3a-4cca-b905-42bc857c31aa" path="/var/lib/kubelet/pods/6bb8844f-1f3a-4cca-b905-42bc857c31aa/volumes"
Jan 28 15:32:54 crc kubenswrapper[4871]: I0128 15:32:54.352067 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6fbbb975c4-sgkdk"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.089909 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-zj59b"]
Jan 28 15:32:55 crc kubenswrapper[4871]: E0128 15:32:55.090639 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f12e388e-950a-4903-aaad-d3707a99b798" containerName="registry-server"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.090669 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="f12e388e-950a-4903-aaad-d3707a99b798" containerName="registry-server"
Jan 28 15:32:55 crc kubenswrapper[4871]: E0128 15:32:55.090687 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb8844f-1f3a-4cca-b905-42bc857c31aa" containerName="registry-server"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.090698 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb8844f-1f3a-4cca-b905-42bc857c31aa" containerName="registry-server"
Jan 28 15:32:55 crc kubenswrapper[4871]: E0128 15:32:55.090717 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f12e388e-950a-4903-aaad-d3707a99b798" containerName="extract-content"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.090729 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="f12e388e-950a-4903-aaad-d3707a99b798" containerName="extract-content"
Jan 28 15:32:55 crc kubenswrapper[4871]: E0128 15:32:55.090747 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb8844f-1f3a-4cca-b905-42bc857c31aa" containerName="extract-utilities"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.090759 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb8844f-1f3a-4cca-b905-42bc857c31aa" containerName="extract-utilities"
Jan 28 15:32:55 crc kubenswrapper[4871]: E0128 15:32:55.090774 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f12e388e-950a-4903-aaad-d3707a99b798" containerName="extract-utilities"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.090785 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="f12e388e-950a-4903-aaad-d3707a99b798" containerName="extract-utilities"
Jan 28 15:32:55 crc kubenswrapper[4871]: E0128 15:32:55.090808 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb8844f-1f3a-4cca-b905-42bc857c31aa" containerName="extract-content"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.090820 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb8844f-1f3a-4cca-b905-42bc857c31aa" containerName="extract-content"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.090993 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bb8844f-1f3a-4cca-b905-42bc857c31aa" containerName="registry-server"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.091021 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="f12e388e-950a-4903-aaad-d3707a99b798" containerName="registry-server"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.091678 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zj59b"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.093808 4871 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-8nsfm"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.095567 4871 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.102730 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-zj59b"]
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.107921 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-sxgvx"]
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.110363 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-sxgvx"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.112039 4871 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.124855 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.158558 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11b42cc6-d8ef-4a19-8486-a430ba2f958e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zj59b\" (UID: \"11b42cc6-d8ef-4a19-8486-a430ba2f958e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zj59b"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.158677 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5hgk\" (UniqueName: \"kubernetes.io/projected/11b42cc6-d8ef-4a19-8486-a430ba2f958e-kube-api-access-d5hgk\") pod \"frr-k8s-webhook-server-7df86c4f6c-zj59b\" (UID: \"11b42cc6-d8ef-4a19-8486-a430ba2f958e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zj59b"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.170349 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-4mfp9"]
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.171264 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-4mfp9"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.173265 4871 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.173671 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.175864 4871 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-wh8g8"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.175911 4871 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.178566 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-nhdql"]
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.179668 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-nhdql"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.183088 4871 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.187767 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-nhdql"]
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.259634 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ab15ac57-d0f0-4f23-95f1-00c1762553d1-reloader\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.259684 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/430986a8-e927-489e-888a-6f119020bdda-metrics-certs\") pod \"speaker-4mfp9\" (UID: \"430986a8-e927-489e-888a-6f119020bdda\") " pod="metallb-system/speaker-4mfp9"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.259703 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ab15ac57-d0f0-4f23-95f1-00c1762553d1-frr-sockets\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.259724 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/430986a8-e927-489e-888a-6f119020bdda-memberlist\") pod \"speaker-4mfp9\" (UID: \"430986a8-e927-489e-888a-6f119020bdda\") " pod="metallb-system/speaker-4mfp9"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.259749 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11b42cc6-d8ef-4a19-8486-a430ba2f958e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zj59b\" (UID: \"11b42cc6-d8ef-4a19-8486-a430ba2f958e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zj59b"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.259772 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9v6g\" (UniqueName: \"kubernetes.io/projected/ab15ac57-d0f0-4f23-95f1-00c1762553d1-kube-api-access-s9v6g\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx"
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.259791 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab15ac57-d0f0-4f23-95f1-00c1762553d1-metrics-certs\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx"
Jan 28 15:32:55 crc kubenswrapper[4871]: E0128 15:32:55.259923 4871 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.259948 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ab15ac57-d0f0-4f23-95f1-00c1762553d1-frr-conf\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx"
Jan 28 15:32:55 crc kubenswrapper[4871]: E0128 15:32:55.259979 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11b42cc6-d8ef-4a19-8486-a430ba2f958e-cert podName:11b42cc6-d8ef-4a19-8486-a430ba2f958e nodeName:}" failed.
No retries permitted until 2026-01-28 15:32:55.759962711 +0000 UTC m=+927.655801033 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/11b42cc6-d8ef-4a19-8486-a430ba2f958e-cert") pod "frr-k8s-webhook-server-7df86c4f6c-zj59b" (UID: "11b42cc6-d8ef-4a19-8486-a430ba2f958e") : secret "frr-k8s-webhook-server-cert" not found Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.260455 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/430986a8-e927-489e-888a-6f119020bdda-metallb-excludel2\") pod \"speaker-4mfp9\" (UID: \"430986a8-e927-489e-888a-6f119020bdda\") " pod="metallb-system/speaker-4mfp9" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.260720 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ab15ac57-d0f0-4f23-95f1-00c1762553d1-frr-startup\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.260757 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5hgk\" (UniqueName: \"kubernetes.io/projected/11b42cc6-d8ef-4a19-8486-a430ba2f958e-kube-api-access-d5hgk\") pod \"frr-k8s-webhook-server-7df86c4f6c-zj59b\" (UID: \"11b42cc6-d8ef-4a19-8486-a430ba2f958e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zj59b" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.260839 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ab15ac57-d0f0-4f23-95f1-00c1762553d1-metrics\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 
15:32:55.260908 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2cjl\" (UniqueName: \"kubernetes.io/projected/430986a8-e927-489e-888a-6f119020bdda-kube-api-access-k2cjl\") pod \"speaker-4mfp9\" (UID: \"430986a8-e927-489e-888a-6f119020bdda\") " pod="metallb-system/speaker-4mfp9" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.260947 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z2rx\" (UniqueName: \"kubernetes.io/projected/712268de-0e81-4e98-af1c-fb669463f095-kube-api-access-8z2rx\") pod \"controller-6968d8fdc4-nhdql\" (UID: \"712268de-0e81-4e98-af1c-fb669463f095\") " pod="metallb-system/controller-6968d8fdc4-nhdql" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.260981 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/712268de-0e81-4e98-af1c-fb669463f095-metrics-certs\") pod \"controller-6968d8fdc4-nhdql\" (UID: \"712268de-0e81-4e98-af1c-fb669463f095\") " pod="metallb-system/controller-6968d8fdc4-nhdql" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.261013 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/712268de-0e81-4e98-af1c-fb669463f095-cert\") pod \"controller-6968d8fdc4-nhdql\" (UID: \"712268de-0e81-4e98-af1c-fb669463f095\") " pod="metallb-system/controller-6968d8fdc4-nhdql" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.277408 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5hgk\" (UniqueName: \"kubernetes.io/projected/11b42cc6-d8ef-4a19-8486-a430ba2f958e-kube-api-access-d5hgk\") pod \"frr-k8s-webhook-server-7df86c4f6c-zj59b\" (UID: \"11b42cc6-d8ef-4a19-8486-a430ba2f958e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zj59b" Jan 28 
15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.362081 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/430986a8-e927-489e-888a-6f119020bdda-metallb-excludel2\") pod \"speaker-4mfp9\" (UID: \"430986a8-e927-489e-888a-6f119020bdda\") " pod="metallb-system/speaker-4mfp9" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.362139 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ab15ac57-d0f0-4f23-95f1-00c1762553d1-frr-startup\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.362162 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ab15ac57-d0f0-4f23-95f1-00c1762553d1-metrics\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.362190 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2cjl\" (UniqueName: \"kubernetes.io/projected/430986a8-e927-489e-888a-6f119020bdda-kube-api-access-k2cjl\") pod \"speaker-4mfp9\" (UID: \"430986a8-e927-489e-888a-6f119020bdda\") " pod="metallb-system/speaker-4mfp9" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.362216 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z2rx\" (UniqueName: \"kubernetes.io/projected/712268de-0e81-4e98-af1c-fb669463f095-kube-api-access-8z2rx\") pod \"controller-6968d8fdc4-nhdql\" (UID: \"712268de-0e81-4e98-af1c-fb669463f095\") " pod="metallb-system/controller-6968d8fdc4-nhdql" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.362240 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/712268de-0e81-4e98-af1c-fb669463f095-metrics-certs\") pod \"controller-6968d8fdc4-nhdql\" (UID: \"712268de-0e81-4e98-af1c-fb669463f095\") " pod="metallb-system/controller-6968d8fdc4-nhdql" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.362260 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/712268de-0e81-4e98-af1c-fb669463f095-cert\") pod \"controller-6968d8fdc4-nhdql\" (UID: \"712268de-0e81-4e98-af1c-fb669463f095\") " pod="metallb-system/controller-6968d8fdc4-nhdql" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.362303 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ab15ac57-d0f0-4f23-95f1-00c1762553d1-reloader\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.362330 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/430986a8-e927-489e-888a-6f119020bdda-metrics-certs\") pod \"speaker-4mfp9\" (UID: \"430986a8-e927-489e-888a-6f119020bdda\") " pod="metallb-system/speaker-4mfp9" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.362352 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ab15ac57-d0f0-4f23-95f1-00c1762553d1-frr-sockets\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.362380 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/430986a8-e927-489e-888a-6f119020bdda-memberlist\") pod \"speaker-4mfp9\" (UID: \"430986a8-e927-489e-888a-6f119020bdda\") " 
pod="metallb-system/speaker-4mfp9" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.362425 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9v6g\" (UniqueName: \"kubernetes.io/projected/ab15ac57-d0f0-4f23-95f1-00c1762553d1-kube-api-access-s9v6g\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.362450 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab15ac57-d0f0-4f23-95f1-00c1762553d1-metrics-certs\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.362685 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ab15ac57-d0f0-4f23-95f1-00c1762553d1-frr-conf\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.363021 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ab15ac57-d0f0-4f23-95f1-00c1762553d1-frr-conf\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.363142 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/430986a8-e927-489e-888a-6f119020bdda-metallb-excludel2\") pod \"speaker-4mfp9\" (UID: \"430986a8-e927-489e-888a-6f119020bdda\") " pod="metallb-system/speaker-4mfp9" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.363364 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/ab15ac57-d0f0-4f23-95f1-00c1762553d1-reloader\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.363642 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ab15ac57-d0f0-4f23-95f1-00c1762553d1-metrics\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.363953 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ab15ac57-d0f0-4f23-95f1-00c1762553d1-frr-startup\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:32:55 crc kubenswrapper[4871]: E0128 15:32:55.364032 4871 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 15:32:55 crc kubenswrapper[4871]: E0128 15:32:55.364131 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/430986a8-e927-489e-888a-6f119020bdda-memberlist podName:430986a8-e927-489e-888a-6f119020bdda nodeName:}" failed. No retries permitted until 2026-01-28 15:32:55.864101057 +0000 UTC m=+927.759939419 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/430986a8-e927-489e-888a-6f119020bdda-memberlist") pod "speaker-4mfp9" (UID: "430986a8-e927-489e-888a-6f119020bdda") : secret "metallb-memberlist" not found Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.364197 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ab15ac57-d0f0-4f23-95f1-00c1762553d1-frr-sockets\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.366541 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/430986a8-e927-489e-888a-6f119020bdda-metrics-certs\") pod \"speaker-4mfp9\" (UID: \"430986a8-e927-489e-888a-6f119020bdda\") " pod="metallb-system/speaker-4mfp9" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.367232 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/712268de-0e81-4e98-af1c-fb669463f095-metrics-certs\") pod \"controller-6968d8fdc4-nhdql\" (UID: \"712268de-0e81-4e98-af1c-fb669463f095\") " pod="metallb-system/controller-6968d8fdc4-nhdql" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.368716 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab15ac57-d0f0-4f23-95f1-00c1762553d1-metrics-certs\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.368878 4871 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.381925 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/712268de-0e81-4e98-af1c-fb669463f095-cert\") pod \"controller-6968d8fdc4-nhdql\" (UID: \"712268de-0e81-4e98-af1c-fb669463f095\") " pod="metallb-system/controller-6968d8fdc4-nhdql" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.382206 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z2rx\" (UniqueName: \"kubernetes.io/projected/712268de-0e81-4e98-af1c-fb669463f095-kube-api-access-8z2rx\") pod \"controller-6968d8fdc4-nhdql\" (UID: \"712268de-0e81-4e98-af1c-fb669463f095\") " pod="metallb-system/controller-6968d8fdc4-nhdql" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.383204 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2cjl\" (UniqueName: \"kubernetes.io/projected/430986a8-e927-489e-888a-6f119020bdda-kube-api-access-k2cjl\") pod \"speaker-4mfp9\" (UID: \"430986a8-e927-489e-888a-6f119020bdda\") " pod="metallb-system/speaker-4mfp9" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.386454 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9v6g\" (UniqueName: \"kubernetes.io/projected/ab15ac57-d0f0-4f23-95f1-00c1762553d1-kube-api-access-s9v6g\") pod \"frr-k8s-sxgvx\" (UID: \"ab15ac57-d0f0-4f23-95f1-00c1762553d1\") " pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.428215 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.503226 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-nhdql" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.702955 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-nhdql"] Jan 28 15:32:55 crc kubenswrapper[4871]: W0128 15:32:55.706778 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod712268de_0e81_4e98_af1c_fb669463f095.slice/crio-8dca328e167ca1922affbcf11e014874899eec372c6c2ed613abd9eaa95221e7 WatchSource:0}: Error finding container 8dca328e167ca1922affbcf11e014874899eec372c6c2ed613abd9eaa95221e7: Status 404 returned error can't find the container with id 8dca328e167ca1922affbcf11e014874899eec372c6c2ed613abd9eaa95221e7 Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.739302 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-nhdql" event={"ID":"712268de-0e81-4e98-af1c-fb669463f095","Type":"ContainerStarted","Data":"8dca328e167ca1922affbcf11e014874899eec372c6c2ed613abd9eaa95221e7"} Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.768299 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11b42cc6-d8ef-4a19-8486-a430ba2f958e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zj59b\" (UID: \"11b42cc6-d8ef-4a19-8486-a430ba2f958e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zj59b" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.774003 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11b42cc6-d8ef-4a19-8486-a430ba2f958e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zj59b\" (UID: \"11b42cc6-d8ef-4a19-8486-a430ba2f958e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zj59b" Jan 28 15:32:55 crc kubenswrapper[4871]: I0128 15:32:55.871278 4871 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/430986a8-e927-489e-888a-6f119020bdda-memberlist\") pod \"speaker-4mfp9\" (UID: \"430986a8-e927-489e-888a-6f119020bdda\") " pod="metallb-system/speaker-4mfp9" Jan 28 15:32:55 crc kubenswrapper[4871]: E0128 15:32:55.871446 4871 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 15:32:55 crc kubenswrapper[4871]: E0128 15:32:55.871512 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/430986a8-e927-489e-888a-6f119020bdda-memberlist podName:430986a8-e927-489e-888a-6f119020bdda nodeName:}" failed. No retries permitted until 2026-01-28 15:32:56.871490388 +0000 UTC m=+928.767328710 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/430986a8-e927-489e-888a-6f119020bdda-memberlist") pod "speaker-4mfp9" (UID: "430986a8-e927-489e-888a-6f119020bdda") : secret "metallb-memberlist" not found Jan 28 15:32:56 crc kubenswrapper[4871]: I0128 15:32:56.015221 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zj59b" Jan 28 15:32:56 crc kubenswrapper[4871]: I0128 15:32:56.211122 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-zj59b"] Jan 28 15:32:56 crc kubenswrapper[4871]: I0128 15:32:56.748290 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sxgvx" event={"ID":"ab15ac57-d0f0-4f23-95f1-00c1762553d1","Type":"ContainerStarted","Data":"1f4314276216ae22c764c8cbc856ce7db51148aee6648c6d9c1728a56e0d0657"} Jan 28 15:32:56 crc kubenswrapper[4871]: I0128 15:32:56.750184 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-nhdql" event={"ID":"712268de-0e81-4e98-af1c-fb669463f095","Type":"ContainerStarted","Data":"e9583188905af5712c502391b4af29218b3d6479846b8ead480ee98ba2214395"} Jan 28 15:32:56 crc kubenswrapper[4871]: I0128 15:32:56.750220 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-nhdql" event={"ID":"712268de-0e81-4e98-af1c-fb669463f095","Type":"ContainerStarted","Data":"5d3f6571c155ef56f5af086bb12fcb81e420862e8b20d84245a0ac27928f5d52"} Jan 28 15:32:56 crc kubenswrapper[4871]: I0128 15:32:56.750337 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-nhdql" Jan 28 15:32:56 crc kubenswrapper[4871]: I0128 15:32:56.751087 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zj59b" event={"ID":"11b42cc6-d8ef-4a19-8486-a430ba2f958e","Type":"ContainerStarted","Data":"a235e72da5d84c0f48f59e2fc4b24c6077f23bc0845b347c1ca94ae7caf1c3f0"} Jan 28 15:32:56 crc kubenswrapper[4871]: I0128 15:32:56.771627 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-nhdql" podStartSLOduration=1.771608122 podStartE2EDuration="1.771608122s" 
podCreationTimestamp="2026-01-28 15:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:32:56.766679887 +0000 UTC m=+928.662518219" watchObservedRunningTime="2026-01-28 15:32:56.771608122 +0000 UTC m=+928.667446444" Jan 28 15:32:56 crc kubenswrapper[4871]: I0128 15:32:56.884815 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/430986a8-e927-489e-888a-6f119020bdda-memberlist\") pod \"speaker-4mfp9\" (UID: \"430986a8-e927-489e-888a-6f119020bdda\") " pod="metallb-system/speaker-4mfp9" Jan 28 15:32:56 crc kubenswrapper[4871]: I0128 15:32:56.897132 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/430986a8-e927-489e-888a-6f119020bdda-memberlist\") pod \"speaker-4mfp9\" (UID: \"430986a8-e927-489e-888a-6f119020bdda\") " pod="metallb-system/speaker-4mfp9" Jan 28 15:32:56 crc kubenswrapper[4871]: I0128 15:32:56.987796 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-4mfp9" Jan 28 15:32:57 crc kubenswrapper[4871]: W0128 15:32:57.016815 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod430986a8_e927_489e_888a_6f119020bdda.slice/crio-c90e564a042dab3cdaf6a678590551e1160151f38bf207cf4d89f63637317c70 WatchSource:0}: Error finding container c90e564a042dab3cdaf6a678590551e1160151f38bf207cf4d89f63637317c70: Status 404 returned error can't find the container with id c90e564a042dab3cdaf6a678590551e1160151f38bf207cf4d89f63637317c70 Jan 28 15:32:57 crc kubenswrapper[4871]: I0128 15:32:57.762101 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4mfp9" event={"ID":"430986a8-e927-489e-888a-6f119020bdda","Type":"ContainerStarted","Data":"bc29ed64102e7a3c3331bf60213a72305e5f13b1f956c2c6a6397045ca311c34"} Jan 28 15:32:57 crc kubenswrapper[4871]: I0128 15:32:57.762160 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4mfp9" event={"ID":"430986a8-e927-489e-888a-6f119020bdda","Type":"ContainerStarted","Data":"c141f5b8cd2f68fb8b0433ca1bd599b03b105bff9af38ccaa33f553605103ffc"} Jan 28 15:32:57 crc kubenswrapper[4871]: I0128 15:32:57.762175 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4mfp9" event={"ID":"430986a8-e927-489e-888a-6f119020bdda","Type":"ContainerStarted","Data":"c90e564a042dab3cdaf6a678590551e1160151f38bf207cf4d89f63637317c70"} Jan 28 15:32:57 crc kubenswrapper[4871]: I0128 15:32:57.786478 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-4mfp9" podStartSLOduration=2.7864528760000002 podStartE2EDuration="2.786452876s" podCreationTimestamp="2026-01-28 15:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:32:57.779756446 +0000 UTC m=+929.675594778" 
watchObservedRunningTime="2026-01-28 15:32:57.786452876 +0000 UTC m=+929.682291198" Jan 28 15:33:03 crc kubenswrapper[4871]: I0128 15:33:03.809951 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zj59b" event={"ID":"11b42cc6-d8ef-4a19-8486-a430ba2f958e","Type":"ContainerStarted","Data":"63c68fac52e13027f0ab07daf42660dbed0f9d9ca5fb03e8ebae0e7ce08a6dc2"} Jan 28 15:33:03 crc kubenswrapper[4871]: I0128 15:33:03.810850 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zj59b" Jan 28 15:33:03 crc kubenswrapper[4871]: I0128 15:33:03.814143 4871 generic.go:334] "Generic (PLEG): container finished" podID="ab15ac57-d0f0-4f23-95f1-00c1762553d1" containerID="67f8491cec7b1ca1946f67594c4fb200090810c7c08f563b82fc2fe22da12e1f" exitCode=0 Jan 28 15:33:03 crc kubenswrapper[4871]: I0128 15:33:03.814184 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sxgvx" event={"ID":"ab15ac57-d0f0-4f23-95f1-00c1762553d1","Type":"ContainerDied","Data":"67f8491cec7b1ca1946f67594c4fb200090810c7c08f563b82fc2fe22da12e1f"} Jan 28 15:33:03 crc kubenswrapper[4871]: I0128 15:33:03.831630 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zj59b" podStartSLOduration=1.729684169 podStartE2EDuration="8.831613209s" podCreationTimestamp="2026-01-28 15:32:55 +0000 UTC" firstStartedPulling="2026-01-28 15:32:56.215038659 +0000 UTC m=+928.110876981" lastFinishedPulling="2026-01-28 15:33:03.316967709 +0000 UTC m=+935.212806021" observedRunningTime="2026-01-28 15:33:03.829004746 +0000 UTC m=+935.724843118" watchObservedRunningTime="2026-01-28 15:33:03.831613209 +0000 UTC m=+935.727451531" Jan 28 15:33:04 crc kubenswrapper[4871]: I0128 15:33:04.819968 4871 generic.go:334] "Generic (PLEG): container finished" podID="ab15ac57-d0f0-4f23-95f1-00c1762553d1" 
containerID="18e9a9d346e89399e55e164f6c3e8f7d6a5d7a162d5a4ba513a4dd419f24d084" exitCode=0 Jan 28 15:33:04 crc kubenswrapper[4871]: I0128 15:33:04.821200 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sxgvx" event={"ID":"ab15ac57-d0f0-4f23-95f1-00c1762553d1","Type":"ContainerDied","Data":"18e9a9d346e89399e55e164f6c3e8f7d6a5d7a162d5a4ba513a4dd419f24d084"} Jan 28 15:33:05 crc kubenswrapper[4871]: I0128 15:33:05.510630 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-nhdql" Jan 28 15:33:05 crc kubenswrapper[4871]: I0128 15:33:05.828910 4871 generic.go:334] "Generic (PLEG): container finished" podID="ab15ac57-d0f0-4f23-95f1-00c1762553d1" containerID="911c577c9ef2f127c2f062d87245d7f1bb23e04115380b95bdb39074855fec96" exitCode=0 Jan 28 15:33:05 crc kubenswrapper[4871]: I0128 15:33:05.828996 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sxgvx" event={"ID":"ab15ac57-d0f0-4f23-95f1-00c1762553d1","Type":"ContainerDied","Data":"911c577c9ef2f127c2f062d87245d7f1bb23e04115380b95bdb39074855fec96"} Jan 28 15:33:06 crc kubenswrapper[4871]: I0128 15:33:06.840479 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sxgvx" event={"ID":"ab15ac57-d0f0-4f23-95f1-00c1762553d1","Type":"ContainerStarted","Data":"039ab27ac6d514d22d15eea9fefa8dfde7df2f28b6cb3765f47af35c622a081a"} Jan 28 15:33:06 crc kubenswrapper[4871]: I0128 15:33:06.841067 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sxgvx" event={"ID":"ab15ac57-d0f0-4f23-95f1-00c1762553d1","Type":"ContainerStarted","Data":"7a13a608a1cdb84c2da2b58a1429848f31a9e734424581eb19afa826b178e311"} Jan 28 15:33:06 crc kubenswrapper[4871]: I0128 15:33:06.841083 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sxgvx" 
event={"ID":"ab15ac57-d0f0-4f23-95f1-00c1762553d1","Type":"ContainerStarted","Data":"fd877d0dfc55a4c5e30658d0c47128492cfc31739ff8b4a65b2abec52a01ce05"} Jan 28 15:33:06 crc kubenswrapper[4871]: I0128 15:33:06.841096 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sxgvx" event={"ID":"ab15ac57-d0f0-4f23-95f1-00c1762553d1","Type":"ContainerStarted","Data":"a63ee8d00355fa0d108b184fc839b86b8bc546f477b511da8ef9bab5760ffaac"} Jan 28 15:33:06 crc kubenswrapper[4871]: I0128 15:33:06.841113 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sxgvx" event={"ID":"ab15ac57-d0f0-4f23-95f1-00c1762553d1","Type":"ContainerStarted","Data":"c9c3c9101f8146df54205636915cf576d125bbc87c7dbabceea21166b82d23d0"} Jan 28 15:33:06 crc kubenswrapper[4871]: I0128 15:33:06.988804 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4mfp9" Jan 28 15:33:07 crc kubenswrapper[4871]: I0128 15:33:07.851383 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sxgvx" event={"ID":"ab15ac57-d0f0-4f23-95f1-00c1762553d1","Type":"ContainerStarted","Data":"b81b90d655af8409d4afcc4d59df02e356facab6dab2e6bd0bca0a7e3e821ceb"} Jan 28 15:33:07 crc kubenswrapper[4871]: I0128 15:33:07.851622 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:33:07 crc kubenswrapper[4871]: I0128 15:33:07.875128 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-sxgvx" podStartSLOduration=5.327533014 podStartE2EDuration="12.875109576s" podCreationTimestamp="2026-01-28 15:32:55 +0000 UTC" firstStartedPulling="2026-01-28 15:32:55.746793053 +0000 UTC m=+927.642631375" lastFinishedPulling="2026-01-28 15:33:03.294369605 +0000 UTC m=+935.190207937" observedRunningTime="2026-01-28 15:33:07.873891027 +0000 UTC m=+939.769729389" watchObservedRunningTime="2026-01-28 15:33:07.875109576 +0000 UTC 
m=+939.770947898" Jan 28 15:33:10 crc kubenswrapper[4871]: I0128 15:33:10.429438 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:33:10 crc kubenswrapper[4871]: I0128 15:33:10.464674 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:33:15 crc kubenswrapper[4871]: I0128 15:33:15.431807 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-sxgvx" Jan 28 15:33:16 crc kubenswrapper[4871]: I0128 15:33:16.021056 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zj59b" Jan 28 15:33:16 crc kubenswrapper[4871]: I0128 15:33:16.992061 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-4mfp9" Jan 28 15:33:18 crc kubenswrapper[4871]: I0128 15:33:18.327025 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6"] Jan 28 15:33:18 crc kubenswrapper[4871]: I0128 15:33:18.328687 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6" Jan 28 15:33:18 crc kubenswrapper[4871]: I0128 15:33:18.331357 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 15:33:18 crc kubenswrapper[4871]: I0128 15:33:18.373888 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6"] Jan 28 15:33:18 crc kubenswrapper[4871]: I0128 15:33:18.519219 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c9e4c264-467e-4342-92f0-ee028eb94264-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6\" (UID: \"c9e4c264-467e-4342-92f0-ee028eb94264\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6" Jan 28 15:33:18 crc kubenswrapper[4871]: I0128 15:33:18.519314 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c9e4c264-467e-4342-92f0-ee028eb94264-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6\" (UID: \"c9e4c264-467e-4342-92f0-ee028eb94264\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6" Jan 28 15:33:18 crc kubenswrapper[4871]: I0128 15:33:18.519385 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztqr9\" (UniqueName: \"kubernetes.io/projected/c9e4c264-467e-4342-92f0-ee028eb94264-kube-api-access-ztqr9\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6\" (UID: \"c9e4c264-467e-4342-92f0-ee028eb94264\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6" Jan 28 15:33:18 crc kubenswrapper[4871]: 
I0128 15:33:18.623173 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztqr9\" (UniqueName: \"kubernetes.io/projected/c9e4c264-467e-4342-92f0-ee028eb94264-kube-api-access-ztqr9\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6\" (UID: \"c9e4c264-467e-4342-92f0-ee028eb94264\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6" Jan 28 15:33:18 crc kubenswrapper[4871]: I0128 15:33:18.623269 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c9e4c264-467e-4342-92f0-ee028eb94264-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6\" (UID: \"c9e4c264-467e-4342-92f0-ee028eb94264\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6" Jan 28 15:33:18 crc kubenswrapper[4871]: I0128 15:33:18.623334 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c9e4c264-467e-4342-92f0-ee028eb94264-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6\" (UID: \"c9e4c264-467e-4342-92f0-ee028eb94264\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6" Jan 28 15:33:18 crc kubenswrapper[4871]: I0128 15:33:18.624149 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c9e4c264-467e-4342-92f0-ee028eb94264-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6\" (UID: \"c9e4c264-467e-4342-92f0-ee028eb94264\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6" Jan 28 15:33:18 crc kubenswrapper[4871]: I0128 15:33:18.624833 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/c9e4c264-467e-4342-92f0-ee028eb94264-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6\" (UID: \"c9e4c264-467e-4342-92f0-ee028eb94264\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6" Jan 28 15:33:18 crc kubenswrapper[4871]: I0128 15:33:18.666363 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztqr9\" (UniqueName: \"kubernetes.io/projected/c9e4c264-467e-4342-92f0-ee028eb94264-kube-api-access-ztqr9\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6\" (UID: \"c9e4c264-467e-4342-92f0-ee028eb94264\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6" Jan 28 15:33:18 crc kubenswrapper[4871]: I0128 15:33:18.949986 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6" Jan 28 15:33:19 crc kubenswrapper[4871]: I0128 15:33:19.157480 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6"] Jan 28 15:33:19 crc kubenswrapper[4871]: I0128 15:33:19.930576 4871 generic.go:334] "Generic (PLEG): container finished" podID="c9e4c264-467e-4342-92f0-ee028eb94264" containerID="92d56363f860c7a4895b23d15eeb989cf02c6d472980ccf60789b90052d36490" exitCode=0 Jan 28 15:33:19 crc kubenswrapper[4871]: I0128 15:33:19.930704 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6" event={"ID":"c9e4c264-467e-4342-92f0-ee028eb94264","Type":"ContainerDied","Data":"92d56363f860c7a4895b23d15eeb989cf02c6d472980ccf60789b90052d36490"} Jan 28 15:33:19 crc kubenswrapper[4871]: I0128 15:33:19.930907 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6" event={"ID":"c9e4c264-467e-4342-92f0-ee028eb94264","Type":"ContainerStarted","Data":"e91ade7633527dd83388aa1a51949fd073f970541e438ff345a7c99f7347c651"} Jan 28 15:33:23 crc kubenswrapper[4871]: I0128 15:33:23.954464 4871 generic.go:334] "Generic (PLEG): container finished" podID="c9e4c264-467e-4342-92f0-ee028eb94264" containerID="8a5c47394f1099113b7050a8f1dce341855812db9e478231f5fb5620a9e3be0c" exitCode=0 Jan 28 15:33:23 crc kubenswrapper[4871]: I0128 15:33:23.954544 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6" event={"ID":"c9e4c264-467e-4342-92f0-ee028eb94264","Type":"ContainerDied","Data":"8a5c47394f1099113b7050a8f1dce341855812db9e478231f5fb5620a9e3be0c"} Jan 28 15:33:24 crc kubenswrapper[4871]: I0128 15:33:24.963750 4871 generic.go:334] "Generic (PLEG): container finished" podID="c9e4c264-467e-4342-92f0-ee028eb94264" containerID="e77b2c06c0b3a4f7a62f452ca7d8357f229b3a5e9eeec2b37c66a1ff4927d5d7" exitCode=0 Jan 28 15:33:24 crc kubenswrapper[4871]: I0128 15:33:24.963962 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6" event={"ID":"c9e4c264-467e-4342-92f0-ee028eb94264","Type":"ContainerDied","Data":"e77b2c06c0b3a4f7a62f452ca7d8357f229b3a5e9eeec2b37c66a1ff4927d5d7"} Jan 28 15:33:26 crc kubenswrapper[4871]: I0128 15:33:26.242849 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6" Jan 28 15:33:26 crc kubenswrapper[4871]: I0128 15:33:26.332045 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztqr9\" (UniqueName: \"kubernetes.io/projected/c9e4c264-467e-4342-92f0-ee028eb94264-kube-api-access-ztqr9\") pod \"c9e4c264-467e-4342-92f0-ee028eb94264\" (UID: \"c9e4c264-467e-4342-92f0-ee028eb94264\") " Jan 28 15:33:26 crc kubenswrapper[4871]: I0128 15:33:26.332106 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c9e4c264-467e-4342-92f0-ee028eb94264-util\") pod \"c9e4c264-467e-4342-92f0-ee028eb94264\" (UID: \"c9e4c264-467e-4342-92f0-ee028eb94264\") " Jan 28 15:33:26 crc kubenswrapper[4871]: I0128 15:33:26.332140 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c9e4c264-467e-4342-92f0-ee028eb94264-bundle\") pod \"c9e4c264-467e-4342-92f0-ee028eb94264\" (UID: \"c9e4c264-467e-4342-92f0-ee028eb94264\") " Jan 28 15:33:26 crc kubenswrapper[4871]: I0128 15:33:26.333350 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9e4c264-467e-4342-92f0-ee028eb94264-bundle" (OuterVolumeSpecName: "bundle") pod "c9e4c264-467e-4342-92f0-ee028eb94264" (UID: "c9e4c264-467e-4342-92f0-ee028eb94264"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:33:26 crc kubenswrapper[4871]: I0128 15:33:26.336912 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e4c264-467e-4342-92f0-ee028eb94264-kube-api-access-ztqr9" (OuterVolumeSpecName: "kube-api-access-ztqr9") pod "c9e4c264-467e-4342-92f0-ee028eb94264" (UID: "c9e4c264-467e-4342-92f0-ee028eb94264"). InnerVolumeSpecName "kube-api-access-ztqr9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:26 crc kubenswrapper[4871]: I0128 15:33:26.341467 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9e4c264-467e-4342-92f0-ee028eb94264-util" (OuterVolumeSpecName: "util") pod "c9e4c264-467e-4342-92f0-ee028eb94264" (UID: "c9e4c264-467e-4342-92f0-ee028eb94264"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:33:26 crc kubenswrapper[4871]: I0128 15:33:26.434434 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztqr9\" (UniqueName: \"kubernetes.io/projected/c9e4c264-467e-4342-92f0-ee028eb94264-kube-api-access-ztqr9\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:26 crc kubenswrapper[4871]: I0128 15:33:26.434475 4871 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c9e4c264-467e-4342-92f0-ee028eb94264-util\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:26 crc kubenswrapper[4871]: I0128 15:33:26.434485 4871 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c9e4c264-467e-4342-92f0-ee028eb94264-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:26 crc kubenswrapper[4871]: I0128 15:33:26.985457 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6" event={"ID":"c9e4c264-467e-4342-92f0-ee028eb94264","Type":"ContainerDied","Data":"e91ade7633527dd83388aa1a51949fd073f970541e438ff345a7c99f7347c651"} Jan 28 15:33:26 crc kubenswrapper[4871]: I0128 15:33:26.985510 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e91ade7633527dd83388aa1a51949fd073f970541e438ff345a7c99f7347c651" Jan 28 15:33:26 crc kubenswrapper[4871]: I0128 15:33:26.985549 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6" Jan 28 15:33:31 crc kubenswrapper[4871]: I0128 15:33:31.325431 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z7s5x"] Jan 28 15:33:31 crc kubenswrapper[4871]: E0128 15:33:31.326138 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e4c264-467e-4342-92f0-ee028eb94264" containerName="util" Jan 28 15:33:31 crc kubenswrapper[4871]: I0128 15:33:31.326149 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e4c264-467e-4342-92f0-ee028eb94264" containerName="util" Jan 28 15:33:31 crc kubenswrapper[4871]: E0128 15:33:31.326159 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e4c264-467e-4342-92f0-ee028eb94264" containerName="extract" Jan 28 15:33:31 crc kubenswrapper[4871]: I0128 15:33:31.326165 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e4c264-467e-4342-92f0-ee028eb94264" containerName="extract" Jan 28 15:33:31 crc kubenswrapper[4871]: E0128 15:33:31.326181 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e4c264-467e-4342-92f0-ee028eb94264" containerName="pull" Jan 28 15:33:31 crc kubenswrapper[4871]: I0128 15:33:31.326188 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e4c264-467e-4342-92f0-ee028eb94264" containerName="pull" Jan 28 15:33:31 crc kubenswrapper[4871]: I0128 15:33:31.326291 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9e4c264-467e-4342-92f0-ee028eb94264" containerName="extract" Jan 28 15:33:31 crc kubenswrapper[4871]: I0128 15:33:31.326742 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z7s5x" Jan 28 15:33:31 crc kubenswrapper[4871]: I0128 15:33:31.329318 4871 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-pcsgt" Jan 28 15:33:31 crc kubenswrapper[4871]: I0128 15:33:31.329381 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 28 15:33:31 crc kubenswrapper[4871]: I0128 15:33:31.329610 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 28 15:33:31 crc kubenswrapper[4871]: I0128 15:33:31.337743 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z7s5x"] Jan 28 15:33:31 crc kubenswrapper[4871]: I0128 15:33:31.402089 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2kzv\" (UniqueName: \"kubernetes.io/projected/f6e921f3-29c4-4e3c-96f1-7304bd9cb2f9-kube-api-access-z2kzv\") pod \"cert-manager-operator-controller-manager-64cf6dff88-z7s5x\" (UID: \"f6e921f3-29c4-4e3c-96f1-7304bd9cb2f9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z7s5x" Jan 28 15:33:31 crc kubenswrapper[4871]: I0128 15:33:31.402145 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f6e921f3-29c4-4e3c-96f1-7304bd9cb2f9-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-z7s5x\" (UID: \"f6e921f3-29c4-4e3c-96f1-7304bd9cb2f9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z7s5x" Jan 28 15:33:31 crc kubenswrapper[4871]: I0128 15:33:31.503714 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-z2kzv\" (UniqueName: \"kubernetes.io/projected/f6e921f3-29c4-4e3c-96f1-7304bd9cb2f9-kube-api-access-z2kzv\") pod \"cert-manager-operator-controller-manager-64cf6dff88-z7s5x\" (UID: \"f6e921f3-29c4-4e3c-96f1-7304bd9cb2f9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z7s5x" Jan 28 15:33:31 crc kubenswrapper[4871]: I0128 15:33:31.503802 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f6e921f3-29c4-4e3c-96f1-7304bd9cb2f9-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-z7s5x\" (UID: \"f6e921f3-29c4-4e3c-96f1-7304bd9cb2f9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z7s5x" Jan 28 15:33:31 crc kubenswrapper[4871]: I0128 15:33:31.504511 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f6e921f3-29c4-4e3c-96f1-7304bd9cb2f9-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-z7s5x\" (UID: \"f6e921f3-29c4-4e3c-96f1-7304bd9cb2f9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z7s5x" Jan 28 15:33:31 crc kubenswrapper[4871]: I0128 15:33:31.539903 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2kzv\" (UniqueName: \"kubernetes.io/projected/f6e921f3-29c4-4e3c-96f1-7304bd9cb2f9-kube-api-access-z2kzv\") pod \"cert-manager-operator-controller-manager-64cf6dff88-z7s5x\" (UID: \"f6e921f3-29c4-4e3c-96f1-7304bd9cb2f9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z7s5x" Jan 28 15:33:31 crc kubenswrapper[4871]: I0128 15:33:31.643035 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z7s5x" Jan 28 15:33:31 crc kubenswrapper[4871]: I0128 15:33:31.954654 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z7s5x"] Jan 28 15:33:31 crc kubenswrapper[4871]: W0128 15:33:31.958524 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6e921f3_29c4_4e3c_96f1_7304bd9cb2f9.slice/crio-bc6e92e1b73fb8eefba155e724b9f01af5586eeec522de6e345c68f2fb38d250 WatchSource:0}: Error finding container bc6e92e1b73fb8eefba155e724b9f01af5586eeec522de6e345c68f2fb38d250: Status 404 returned error can't find the container with id bc6e92e1b73fb8eefba155e724b9f01af5586eeec522de6e345c68f2fb38d250 Jan 28 15:33:32 crc kubenswrapper[4871]: I0128 15:33:32.016126 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z7s5x" event={"ID":"f6e921f3-29c4-4e3c-96f1-7304bd9cb2f9","Type":"ContainerStarted","Data":"bc6e92e1b73fb8eefba155e724b9f01af5586eeec522de6e345c68f2fb38d250"} Jan 28 15:33:40 crc kubenswrapper[4871]: I0128 15:33:40.070461 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z7s5x" event={"ID":"f6e921f3-29c4-4e3c-96f1-7304bd9cb2f9","Type":"ContainerStarted","Data":"90a4023d367f2f7cd5c58cace6802cdeb330866915cbc606d9e37c1d766dfaea"} Jan 28 15:33:40 crc kubenswrapper[4871]: I0128 15:33:40.087922 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z7s5x" podStartSLOduration=1.09849312 podStartE2EDuration="9.087905904s" podCreationTimestamp="2026-01-28 15:33:31 +0000 UTC" firstStartedPulling="2026-01-28 15:33:31.960862497 +0000 UTC m=+963.856700829" 
lastFinishedPulling="2026-01-28 15:33:39.950275291 +0000 UTC m=+971.846113613" observedRunningTime="2026-01-28 15:33:40.085920731 +0000 UTC m=+971.981759053" watchObservedRunningTime="2026-01-28 15:33:40.087905904 +0000 UTC m=+971.983744226" Jan 28 15:33:42 crc kubenswrapper[4871]: I0128 15:33:42.974044 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-7p7tg"] Jan 28 15:33:42 crc kubenswrapper[4871]: I0128 15:33:42.975888 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-7p7tg" Jan 28 15:33:42 crc kubenswrapper[4871]: I0128 15:33:42.977949 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 28 15:33:42 crc kubenswrapper[4871]: I0128 15:33:42.978310 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 28 15:33:42 crc kubenswrapper[4871]: I0128 15:33:42.978488 4871 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-rb52h" Jan 28 15:33:42 crc kubenswrapper[4871]: I0128 15:33:42.994619 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-7p7tg"] Jan 28 15:33:43 crc kubenswrapper[4871]: I0128 15:33:43.069274 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/963e85a0-b8b7-41d3-89cd-437e0cbec396-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-7p7tg\" (UID: \"963e85a0-b8b7-41d3-89cd-437e0cbec396\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7p7tg" Jan 28 15:33:43 crc kubenswrapper[4871]: I0128 15:33:43.069321 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghcn5\" (UniqueName: 
\"kubernetes.io/projected/963e85a0-b8b7-41d3-89cd-437e0cbec396-kube-api-access-ghcn5\") pod \"cert-manager-webhook-f4fb5df64-7p7tg\" (UID: \"963e85a0-b8b7-41d3-89cd-437e0cbec396\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7p7tg" Jan 28 15:33:43 crc kubenswrapper[4871]: I0128 15:33:43.170187 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/963e85a0-b8b7-41d3-89cd-437e0cbec396-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-7p7tg\" (UID: \"963e85a0-b8b7-41d3-89cd-437e0cbec396\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7p7tg" Jan 28 15:33:43 crc kubenswrapper[4871]: I0128 15:33:43.170542 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghcn5\" (UniqueName: \"kubernetes.io/projected/963e85a0-b8b7-41d3-89cd-437e0cbec396-kube-api-access-ghcn5\") pod \"cert-manager-webhook-f4fb5df64-7p7tg\" (UID: \"963e85a0-b8b7-41d3-89cd-437e0cbec396\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7p7tg" Jan 28 15:33:43 crc kubenswrapper[4871]: I0128 15:33:43.189214 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/963e85a0-b8b7-41d3-89cd-437e0cbec396-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-7p7tg\" (UID: \"963e85a0-b8b7-41d3-89cd-437e0cbec396\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7p7tg" Jan 28 15:33:43 crc kubenswrapper[4871]: I0128 15:33:43.190726 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghcn5\" (UniqueName: \"kubernetes.io/projected/963e85a0-b8b7-41d3-89cd-437e0cbec396-kube-api-access-ghcn5\") pod \"cert-manager-webhook-f4fb5df64-7p7tg\" (UID: \"963e85a0-b8b7-41d3-89cd-437e0cbec396\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7p7tg" Jan 28 15:33:43 crc kubenswrapper[4871]: I0128 15:33:43.300968 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-7p7tg" Jan 28 15:33:43 crc kubenswrapper[4871]: I0128 15:33:43.509638 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-7p7tg"] Jan 28 15:33:44 crc kubenswrapper[4871]: I0128 15:33:44.093194 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-7p7tg" event={"ID":"963e85a0-b8b7-41d3-89cd-437e0cbec396","Type":"ContainerStarted","Data":"d183b913d4b44fbde1b6b58462c47f065042a715d4a9ff2457d6af92eb7f688a"} Jan 28 15:33:46 crc kubenswrapper[4871]: I0128 15:33:46.114995 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-bm7cr"] Jan 28 15:33:46 crc kubenswrapper[4871]: I0128 15:33:46.116014 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bm7cr" Jan 28 15:33:46 crc kubenswrapper[4871]: I0128 15:33:46.128002 4871 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-9l6m7" Jan 28 15:33:46 crc kubenswrapper[4871]: I0128 15:33:46.131831 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-bm7cr"] Jan 28 15:33:46 crc kubenswrapper[4871]: I0128 15:33:46.230090 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpkgk\" (UniqueName: \"kubernetes.io/projected/ea8a32e9-2b85-4260-a753-b4379145d43f-kube-api-access-lpkgk\") pod \"cert-manager-cainjector-855d9ccff4-bm7cr\" (UID: \"ea8a32e9-2b85-4260-a753-b4379145d43f\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bm7cr" Jan 28 15:33:46 crc kubenswrapper[4871]: I0128 15:33:46.230235 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/ea8a32e9-2b85-4260-a753-b4379145d43f-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-bm7cr\" (UID: \"ea8a32e9-2b85-4260-a753-b4379145d43f\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bm7cr" Jan 28 15:33:46 crc kubenswrapper[4871]: I0128 15:33:46.332143 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpkgk\" (UniqueName: \"kubernetes.io/projected/ea8a32e9-2b85-4260-a753-b4379145d43f-kube-api-access-lpkgk\") pod \"cert-manager-cainjector-855d9ccff4-bm7cr\" (UID: \"ea8a32e9-2b85-4260-a753-b4379145d43f\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bm7cr" Jan 28 15:33:46 crc kubenswrapper[4871]: I0128 15:33:46.332252 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea8a32e9-2b85-4260-a753-b4379145d43f-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-bm7cr\" (UID: \"ea8a32e9-2b85-4260-a753-b4379145d43f\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bm7cr" Jan 28 15:33:46 crc kubenswrapper[4871]: I0128 15:33:46.351746 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea8a32e9-2b85-4260-a753-b4379145d43f-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-bm7cr\" (UID: \"ea8a32e9-2b85-4260-a753-b4379145d43f\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bm7cr" Jan 28 15:33:46 crc kubenswrapper[4871]: I0128 15:33:46.374164 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpkgk\" (UniqueName: \"kubernetes.io/projected/ea8a32e9-2b85-4260-a753-b4379145d43f-kube-api-access-lpkgk\") pod \"cert-manager-cainjector-855d9ccff4-bm7cr\" (UID: \"ea8a32e9-2b85-4260-a753-b4379145d43f\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bm7cr" Jan 28 15:33:46 crc kubenswrapper[4871]: I0128 15:33:46.443519 4871 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bm7cr" Jan 28 15:33:46 crc kubenswrapper[4871]: I0128 15:33:46.770598 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-bm7cr"] Jan 28 15:33:47 crc kubenswrapper[4871]: I0128 15:33:47.118831 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bm7cr" event={"ID":"ea8a32e9-2b85-4260-a753-b4379145d43f","Type":"ContainerStarted","Data":"8ba855ea7d168fc959bfabf9b5856f18cac2db572b749a32b0107d866d50cf52"} Jan 28 15:33:54 crc kubenswrapper[4871]: I0128 15:33:54.158228 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bm7cr" event={"ID":"ea8a32e9-2b85-4260-a753-b4379145d43f","Type":"ContainerStarted","Data":"22b27839e9fb168d98e5302a99ecf6f052621046858e07ea024197da62c587cc"} Jan 28 15:33:54 crc kubenswrapper[4871]: I0128 15:33:54.159652 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-7p7tg" event={"ID":"963e85a0-b8b7-41d3-89cd-437e0cbec396","Type":"ContainerStarted","Data":"9e8915286dd39a708e266c2a2cecaa1902757633b0b87125676de50a8fc90586"} Jan 28 15:33:54 crc kubenswrapper[4871]: I0128 15:33:54.159809 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-7p7tg" Jan 28 15:33:54 crc kubenswrapper[4871]: I0128 15:33:54.181900 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bm7cr" podStartSLOduration=2.012170372 podStartE2EDuration="8.181876745s" podCreationTimestamp="2026-01-28 15:33:46 +0000 UTC" firstStartedPulling="2026-01-28 15:33:46.790082868 +0000 UTC m=+978.685921190" lastFinishedPulling="2026-01-28 15:33:52.959789241 +0000 UTC m=+984.855627563" observedRunningTime="2026-01-28 15:33:54.17728748 +0000 
UTC m=+986.073125812" watchObservedRunningTime="2026-01-28 15:33:54.181876745 +0000 UTC m=+986.077715077" Jan 28 15:33:54 crc kubenswrapper[4871]: I0128 15:33:54.205012 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-7p7tg" podStartSLOduration=2.781368332 podStartE2EDuration="12.204990444s" podCreationTimestamp="2026-01-28 15:33:42 +0000 UTC" firstStartedPulling="2026-01-28 15:33:43.530598023 +0000 UTC m=+975.426436355" lastFinishedPulling="2026-01-28 15:33:52.954220145 +0000 UTC m=+984.850058467" observedRunningTime="2026-01-28 15:33:54.201295697 +0000 UTC m=+986.097134029" watchObservedRunningTime="2026-01-28 15:33:54.204990444 +0000 UTC m=+986.100828766" Jan 28 15:33:58 crc kubenswrapper[4871]: I0128 15:33:58.303385 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-7p7tg" Jan 28 15:34:01 crc kubenswrapper[4871]: I0128 15:34:01.883243 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-l9bx2"] Jan 28 15:34:01 crc kubenswrapper[4871]: I0128 15:34:01.884243 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-l9bx2" Jan 28 15:34:01 crc kubenswrapper[4871]: I0128 15:34:01.890314 4871 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-zdt5r" Jan 28 15:34:01 crc kubenswrapper[4871]: I0128 15:34:01.908641 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-l9bx2"] Jan 28 15:34:01 crc kubenswrapper[4871]: I0128 15:34:01.969257 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx5rj\" (UniqueName: \"kubernetes.io/projected/cd58f6b5-07a7-4bf5-adc6-db2b4d0e021b-kube-api-access-bx5rj\") pod \"cert-manager-86cb77c54b-l9bx2\" (UID: \"cd58f6b5-07a7-4bf5-adc6-db2b4d0e021b\") " pod="cert-manager/cert-manager-86cb77c54b-l9bx2" Jan 28 15:34:01 crc kubenswrapper[4871]: I0128 15:34:01.969301 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd58f6b5-07a7-4bf5-adc6-db2b4d0e021b-bound-sa-token\") pod \"cert-manager-86cb77c54b-l9bx2\" (UID: \"cd58f6b5-07a7-4bf5-adc6-db2b4d0e021b\") " pod="cert-manager/cert-manager-86cb77c54b-l9bx2" Jan 28 15:34:02 crc kubenswrapper[4871]: I0128 15:34:02.070914 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx5rj\" (UniqueName: \"kubernetes.io/projected/cd58f6b5-07a7-4bf5-adc6-db2b4d0e021b-kube-api-access-bx5rj\") pod \"cert-manager-86cb77c54b-l9bx2\" (UID: \"cd58f6b5-07a7-4bf5-adc6-db2b4d0e021b\") " pod="cert-manager/cert-manager-86cb77c54b-l9bx2" Jan 28 15:34:02 crc kubenswrapper[4871]: I0128 15:34:02.070973 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd58f6b5-07a7-4bf5-adc6-db2b4d0e021b-bound-sa-token\") pod \"cert-manager-86cb77c54b-l9bx2\" (UID: 
\"cd58f6b5-07a7-4bf5-adc6-db2b4d0e021b\") " pod="cert-manager/cert-manager-86cb77c54b-l9bx2" Jan 28 15:34:02 crc kubenswrapper[4871]: I0128 15:34:02.103557 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd58f6b5-07a7-4bf5-adc6-db2b4d0e021b-bound-sa-token\") pod \"cert-manager-86cb77c54b-l9bx2\" (UID: \"cd58f6b5-07a7-4bf5-adc6-db2b4d0e021b\") " pod="cert-manager/cert-manager-86cb77c54b-l9bx2" Jan 28 15:34:02 crc kubenswrapper[4871]: I0128 15:34:02.108289 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx5rj\" (UniqueName: \"kubernetes.io/projected/cd58f6b5-07a7-4bf5-adc6-db2b4d0e021b-kube-api-access-bx5rj\") pod \"cert-manager-86cb77c54b-l9bx2\" (UID: \"cd58f6b5-07a7-4bf5-adc6-db2b4d0e021b\") " pod="cert-manager/cert-manager-86cb77c54b-l9bx2" Jan 28 15:34:02 crc kubenswrapper[4871]: I0128 15:34:02.201410 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-l9bx2" Jan 28 15:34:02 crc kubenswrapper[4871]: I0128 15:34:02.389639 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-l9bx2"] Jan 28 15:34:02 crc kubenswrapper[4871]: W0128 15:34:02.394087 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd58f6b5_07a7_4bf5_adc6_db2b4d0e021b.slice/crio-4c2068100ae5062926a6896b8cfffad2a57f0812a6ee841b38979a3e07e9cc43 WatchSource:0}: Error finding container 4c2068100ae5062926a6896b8cfffad2a57f0812a6ee841b38979a3e07e9cc43: Status 404 returned error can't find the container with id 4c2068100ae5062926a6896b8cfffad2a57f0812a6ee841b38979a3e07e9cc43 Jan 28 15:34:03 crc kubenswrapper[4871]: I0128 15:34:03.219957 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-l9bx2" 
event={"ID":"cd58f6b5-07a7-4bf5-adc6-db2b4d0e021b","Type":"ContainerStarted","Data":"60397fb10859f0ca83f320838d7e6a175315d77a235526eb05ea7d9df0470dca"} Jan 28 15:34:03 crc kubenswrapper[4871]: I0128 15:34:03.220434 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-l9bx2" event={"ID":"cd58f6b5-07a7-4bf5-adc6-db2b4d0e021b","Type":"ContainerStarted","Data":"4c2068100ae5062926a6896b8cfffad2a57f0812a6ee841b38979a3e07e9cc43"} Jan 28 15:34:03 crc kubenswrapper[4871]: I0128 15:34:03.255750 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-l9bx2" podStartSLOduration=2.255712689 podStartE2EDuration="2.255712689s" podCreationTimestamp="2026-01-28 15:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:34:03.244996711 +0000 UTC m=+995.140835043" watchObservedRunningTime="2026-01-28 15:34:03.255712689 +0000 UTC m=+995.151551051" Jan 28 15:34:13 crc kubenswrapper[4871]: I0128 15:34:13.813734 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:34:13 crc kubenswrapper[4871]: I0128 15:34:13.814349 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:34:14 crc kubenswrapper[4871]: I0128 15:34:14.849895 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-28hwf"] Jan 28 15:34:14 crc 
kubenswrapper[4871]: I0128 15:34:14.851148 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-28hwf" Jan 28 15:34:14 crc kubenswrapper[4871]: I0128 15:34:14.853666 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-9dm2s" Jan 28 15:34:14 crc kubenswrapper[4871]: I0128 15:34:14.853739 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 28 15:34:14 crc kubenswrapper[4871]: I0128 15:34:14.855549 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 28 15:34:14 crc kubenswrapper[4871]: I0128 15:34:14.870625 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-28hwf"] Jan 28 15:34:14 crc kubenswrapper[4871]: I0128 15:34:14.961179 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td8hz\" (UniqueName: \"kubernetes.io/projected/69c2316a-7c90-4a8d-905e-d1499d6dee39-kube-api-access-td8hz\") pod \"openstack-operator-index-28hwf\" (UID: \"69c2316a-7c90-4a8d-905e-d1499d6dee39\") " pod="openstack-operators/openstack-operator-index-28hwf" Jan 28 15:34:15 crc kubenswrapper[4871]: I0128 15:34:15.063141 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td8hz\" (UniqueName: \"kubernetes.io/projected/69c2316a-7c90-4a8d-905e-d1499d6dee39-kube-api-access-td8hz\") pod \"openstack-operator-index-28hwf\" (UID: \"69c2316a-7c90-4a8d-905e-d1499d6dee39\") " pod="openstack-operators/openstack-operator-index-28hwf" Jan 28 15:34:15 crc kubenswrapper[4871]: I0128 15:34:15.087058 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td8hz\" (UniqueName: 
\"kubernetes.io/projected/69c2316a-7c90-4a8d-905e-d1499d6dee39-kube-api-access-td8hz\") pod \"openstack-operator-index-28hwf\" (UID: \"69c2316a-7c90-4a8d-905e-d1499d6dee39\") " pod="openstack-operators/openstack-operator-index-28hwf" Jan 28 15:34:15 crc kubenswrapper[4871]: I0128 15:34:15.175639 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-28hwf" Jan 28 15:34:16 crc kubenswrapper[4871]: I0128 15:34:16.157930 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-28hwf"] Jan 28 15:34:16 crc kubenswrapper[4871]: I0128 15:34:16.318792 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-28hwf" event={"ID":"69c2316a-7c90-4a8d-905e-d1499d6dee39","Type":"ContainerStarted","Data":"8dbfb408f5ec79d0f1dc44c226db50fbf73ed57323c41d5aaeb895916063bcbc"} Jan 28 15:34:36 crc kubenswrapper[4871]: I0128 15:34:36.472875 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-28hwf" event={"ID":"69c2316a-7c90-4a8d-905e-d1499d6dee39","Type":"ContainerStarted","Data":"5f5012f76d97006dea502767e4ef221c95d675efd94b8e748d3ebbc7ffdc7f1e"} Jan 28 15:34:36 crc kubenswrapper[4871]: I0128 15:34:36.509987 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-28hwf" podStartSLOduration=2.685562758 podStartE2EDuration="22.509951287s" podCreationTimestamp="2026-01-28 15:34:14 +0000 UTC" firstStartedPulling="2026-01-28 15:34:16.16942377 +0000 UTC m=+1008.065262092" lastFinishedPulling="2026-01-28 15:34:35.993812269 +0000 UTC m=+1027.889650621" observedRunningTime="2026-01-28 15:34:36.494795578 +0000 UTC m=+1028.390633940" watchObservedRunningTime="2026-01-28 15:34:36.509951287 +0000 UTC m=+1028.405789699" Jan 28 15:34:43 crc kubenswrapper[4871]: I0128 15:34:43.814156 4871 patch_prober.go:28] interesting 
pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:34:43 crc kubenswrapper[4871]: I0128 15:34:43.814906 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:34:45 crc kubenswrapper[4871]: I0128 15:34:45.176346 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-28hwf" Jan 28 15:34:45 crc kubenswrapper[4871]: I0128 15:34:45.176501 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-28hwf" Jan 28 15:34:45 crc kubenswrapper[4871]: I0128 15:34:45.210636 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-28hwf" Jan 28 15:34:45 crc kubenswrapper[4871]: I0128 15:34:45.561365 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-28hwf" Jan 28 15:35:02 crc kubenswrapper[4871]: I0128 15:35:02.333512 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf"] Jan 28 15:35:02 crc kubenswrapper[4871]: I0128 15:35:02.335243 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf" Jan 28 15:35:02 crc kubenswrapper[4871]: I0128 15:35:02.341502 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf"] Jan 28 15:35:02 crc kubenswrapper[4871]: I0128 15:35:02.341629 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-28gfp" Jan 28 15:35:02 crc kubenswrapper[4871]: I0128 15:35:02.444566 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93d9bbfd-0623-49fd-979d-4be49534ad36-bundle\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf\" (UID: \"93d9bbfd-0623-49fd-979d-4be49534ad36\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf" Jan 28 15:35:02 crc kubenswrapper[4871]: I0128 15:35:02.444667 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93d9bbfd-0623-49fd-979d-4be49534ad36-util\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf\" (UID: \"93d9bbfd-0623-49fd-979d-4be49534ad36\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf" Jan 28 15:35:02 crc kubenswrapper[4871]: I0128 15:35:02.444726 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm28r\" (UniqueName: \"kubernetes.io/projected/93d9bbfd-0623-49fd-979d-4be49534ad36-kube-api-access-vm28r\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf\" (UID: \"93d9bbfd-0623-49fd-979d-4be49534ad36\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf" Jan 28 15:35:02 crc kubenswrapper[4871]: I0128 
15:35:02.546740 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm28r\" (UniqueName: \"kubernetes.io/projected/93d9bbfd-0623-49fd-979d-4be49534ad36-kube-api-access-vm28r\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf\" (UID: \"93d9bbfd-0623-49fd-979d-4be49534ad36\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf" Jan 28 15:35:02 crc kubenswrapper[4871]: I0128 15:35:02.547083 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93d9bbfd-0623-49fd-979d-4be49534ad36-bundle\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf\" (UID: \"93d9bbfd-0623-49fd-979d-4be49534ad36\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf" Jan 28 15:35:02 crc kubenswrapper[4871]: I0128 15:35:02.547561 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93d9bbfd-0623-49fd-979d-4be49534ad36-bundle\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf\" (UID: \"93d9bbfd-0623-49fd-979d-4be49534ad36\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf" Jan 28 15:35:02 crc kubenswrapper[4871]: I0128 15:35:02.547668 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93d9bbfd-0623-49fd-979d-4be49534ad36-util\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf\" (UID: \"93d9bbfd-0623-49fd-979d-4be49534ad36\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf" Jan 28 15:35:02 crc kubenswrapper[4871]: I0128 15:35:02.547919 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/93d9bbfd-0623-49fd-979d-4be49534ad36-util\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf\" (UID: \"93d9bbfd-0623-49fd-979d-4be49534ad36\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf" Jan 28 15:35:02 crc kubenswrapper[4871]: I0128 15:35:02.566378 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm28r\" (UniqueName: \"kubernetes.io/projected/93d9bbfd-0623-49fd-979d-4be49534ad36-kube-api-access-vm28r\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf\" (UID: \"93d9bbfd-0623-49fd-979d-4be49534ad36\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf" Jan 28 15:35:02 crc kubenswrapper[4871]: I0128 15:35:02.697351 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf" Jan 28 15:35:02 crc kubenswrapper[4871]: I0128 15:35:02.944306 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf"] Jan 28 15:35:03 crc kubenswrapper[4871]: I0128 15:35:03.700360 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf" event={"ID":"93d9bbfd-0623-49fd-979d-4be49534ad36","Type":"ContainerStarted","Data":"723f228580e87e50b1c87e6895aab1e9788a6f73f989376e66aefdd9abce2a24"} Jan 28 15:35:03 crc kubenswrapper[4871]: I0128 15:35:03.700701 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf" event={"ID":"93d9bbfd-0623-49fd-979d-4be49534ad36","Type":"ContainerStarted","Data":"e18ba0f54850f1d8e5f7bd7c9371f9dec7994456bd034ab3a01f5eac3caac80d"} Jan 28 15:35:04 crc kubenswrapper[4871]: I0128 15:35:04.709776 4871 generic.go:334] 
"Generic (PLEG): container finished" podID="93d9bbfd-0623-49fd-979d-4be49534ad36" containerID="723f228580e87e50b1c87e6895aab1e9788a6f73f989376e66aefdd9abce2a24" exitCode=0 Jan 28 15:35:04 crc kubenswrapper[4871]: I0128 15:35:04.709832 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf" event={"ID":"93d9bbfd-0623-49fd-979d-4be49534ad36","Type":"ContainerDied","Data":"723f228580e87e50b1c87e6895aab1e9788a6f73f989376e66aefdd9abce2a24"} Jan 28 15:35:08 crc kubenswrapper[4871]: I0128 15:35:08.742282 4871 generic.go:334] "Generic (PLEG): container finished" podID="93d9bbfd-0623-49fd-979d-4be49534ad36" containerID="15f597206fc941ce5aa3ac20a68e42a2af4dd961c032035efcdb7b21aee3fff6" exitCode=0 Jan 28 15:35:08 crc kubenswrapper[4871]: I0128 15:35:08.742354 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf" event={"ID":"93d9bbfd-0623-49fd-979d-4be49534ad36","Type":"ContainerDied","Data":"15f597206fc941ce5aa3ac20a68e42a2af4dd961c032035efcdb7b21aee3fff6"} Jan 28 15:35:09 crc kubenswrapper[4871]: I0128 15:35:09.753673 4871 generic.go:334] "Generic (PLEG): container finished" podID="93d9bbfd-0623-49fd-979d-4be49534ad36" containerID="fdf6e9a42776faa5e27d4b69df0898aa4846c6a742d56751ef5a9fb38ad809c6" exitCode=0 Jan 28 15:35:09 crc kubenswrapper[4871]: I0128 15:35:09.753778 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf" event={"ID":"93d9bbfd-0623-49fd-979d-4be49534ad36","Type":"ContainerDied","Data":"fdf6e9a42776faa5e27d4b69df0898aa4846c6a742d56751ef5a9fb38ad809c6"} Jan 28 15:35:11 crc kubenswrapper[4871]: I0128 15:35:11.006919 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf" Jan 28 15:35:11 crc kubenswrapper[4871]: I0128 15:35:11.174938 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93d9bbfd-0623-49fd-979d-4be49534ad36-util\") pod \"93d9bbfd-0623-49fd-979d-4be49534ad36\" (UID: \"93d9bbfd-0623-49fd-979d-4be49534ad36\") " Jan 28 15:35:11 crc kubenswrapper[4871]: I0128 15:35:11.174994 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93d9bbfd-0623-49fd-979d-4be49534ad36-bundle\") pod \"93d9bbfd-0623-49fd-979d-4be49534ad36\" (UID: \"93d9bbfd-0623-49fd-979d-4be49534ad36\") " Jan 28 15:35:11 crc kubenswrapper[4871]: I0128 15:35:11.175057 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm28r\" (UniqueName: \"kubernetes.io/projected/93d9bbfd-0623-49fd-979d-4be49534ad36-kube-api-access-vm28r\") pod \"93d9bbfd-0623-49fd-979d-4be49534ad36\" (UID: \"93d9bbfd-0623-49fd-979d-4be49534ad36\") " Jan 28 15:35:11 crc kubenswrapper[4871]: I0128 15:35:11.176087 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93d9bbfd-0623-49fd-979d-4be49534ad36-bundle" (OuterVolumeSpecName: "bundle") pod "93d9bbfd-0623-49fd-979d-4be49534ad36" (UID: "93d9bbfd-0623-49fd-979d-4be49534ad36"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:35:11 crc kubenswrapper[4871]: I0128 15:35:11.181264 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93d9bbfd-0623-49fd-979d-4be49534ad36-kube-api-access-vm28r" (OuterVolumeSpecName: "kube-api-access-vm28r") pod "93d9bbfd-0623-49fd-979d-4be49534ad36" (UID: "93d9bbfd-0623-49fd-979d-4be49534ad36"). InnerVolumeSpecName "kube-api-access-vm28r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:35:11 crc kubenswrapper[4871]: I0128 15:35:11.196824 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93d9bbfd-0623-49fd-979d-4be49534ad36-util" (OuterVolumeSpecName: "util") pod "93d9bbfd-0623-49fd-979d-4be49534ad36" (UID: "93d9bbfd-0623-49fd-979d-4be49534ad36"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:35:11 crc kubenswrapper[4871]: I0128 15:35:11.276799 4871 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93d9bbfd-0623-49fd-979d-4be49534ad36-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:35:11 crc kubenswrapper[4871]: I0128 15:35:11.276830 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm28r\" (UniqueName: \"kubernetes.io/projected/93d9bbfd-0623-49fd-979d-4be49534ad36-kube-api-access-vm28r\") on node \"crc\" DevicePath \"\"" Jan 28 15:35:11 crc kubenswrapper[4871]: I0128 15:35:11.276843 4871 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93d9bbfd-0623-49fd-979d-4be49534ad36-util\") on node \"crc\" DevicePath \"\"" Jan 28 15:35:11 crc kubenswrapper[4871]: I0128 15:35:11.774019 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf" event={"ID":"93d9bbfd-0623-49fd-979d-4be49534ad36","Type":"ContainerDied","Data":"e18ba0f54850f1d8e5f7bd7c9371f9dec7994456bd034ab3a01f5eac3caac80d"} Jan 28 15:35:11 crc kubenswrapper[4871]: I0128 15:35:11.774078 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e18ba0f54850f1d8e5f7bd7c9371f9dec7994456bd034ab3a01f5eac3caac80d" Jan 28 15:35:11 crc kubenswrapper[4871]: I0128 15:35:11.774572 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf" Jan 28 15:35:13 crc kubenswrapper[4871]: I0128 15:35:13.813411 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:35:13 crc kubenswrapper[4871]: I0128 15:35:13.813753 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:35:13 crc kubenswrapper[4871]: I0128 15:35:13.813795 4871 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:35:13 crc kubenswrapper[4871]: I0128 15:35:13.814226 4871 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"504ae14f7e055da72d55c5a96bfc70153a17e45dce0bb3e15ce3dccb6926e332"} pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:35:13 crc kubenswrapper[4871]: I0128 15:35:13.814271 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" containerID="cri-o://504ae14f7e055da72d55c5a96bfc70153a17e45dce0bb3e15ce3dccb6926e332" gracePeriod=600 Jan 28 15:35:14 crc kubenswrapper[4871]: I0128 15:35:14.346130 4871 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/openstack-operator-controller-init-678d9cfb88-shfdl"] Jan 28 15:35:14 crc kubenswrapper[4871]: E0128 15:35:14.346961 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d9bbfd-0623-49fd-979d-4be49534ad36" containerName="extract" Jan 28 15:35:14 crc kubenswrapper[4871]: I0128 15:35:14.346987 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d9bbfd-0623-49fd-979d-4be49534ad36" containerName="extract" Jan 28 15:35:14 crc kubenswrapper[4871]: E0128 15:35:14.347001 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d9bbfd-0623-49fd-979d-4be49534ad36" containerName="util" Jan 28 15:35:14 crc kubenswrapper[4871]: I0128 15:35:14.347009 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d9bbfd-0623-49fd-979d-4be49534ad36" containerName="util" Jan 28 15:35:14 crc kubenswrapper[4871]: E0128 15:35:14.347024 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d9bbfd-0623-49fd-979d-4be49534ad36" containerName="pull" Jan 28 15:35:14 crc kubenswrapper[4871]: I0128 15:35:14.347032 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d9bbfd-0623-49fd-979d-4be49534ad36" containerName="pull" Jan 28 15:35:14 crc kubenswrapper[4871]: I0128 15:35:14.347175 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d9bbfd-0623-49fd-979d-4be49534ad36" containerName="extract" Jan 28 15:35:14 crc kubenswrapper[4871]: I0128 15:35:14.347697 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-shfdl" Jan 28 15:35:14 crc kubenswrapper[4871]: I0128 15:35:14.349841 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-xmmrx" Jan 28 15:35:14 crc kubenswrapper[4871]: I0128 15:35:14.373359 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-678d9cfb88-shfdl"] Jan 28 15:35:14 crc kubenswrapper[4871]: I0128 15:35:14.520003 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgrff\" (UniqueName: \"kubernetes.io/projected/53e9d66f-621b-4d3e-a96a-1c330a53643b-kube-api-access-zgrff\") pod \"openstack-operator-controller-init-678d9cfb88-shfdl\" (UID: \"53e9d66f-621b-4d3e-a96a-1c330a53643b\") " pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-shfdl" Jan 28 15:35:14 crc kubenswrapper[4871]: I0128 15:35:14.621715 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgrff\" (UniqueName: \"kubernetes.io/projected/53e9d66f-621b-4d3e-a96a-1c330a53643b-kube-api-access-zgrff\") pod \"openstack-operator-controller-init-678d9cfb88-shfdl\" (UID: \"53e9d66f-621b-4d3e-a96a-1c330a53643b\") " pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-shfdl" Jan 28 15:35:14 crc kubenswrapper[4871]: I0128 15:35:14.664758 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgrff\" (UniqueName: \"kubernetes.io/projected/53e9d66f-621b-4d3e-a96a-1c330a53643b-kube-api-access-zgrff\") pod \"openstack-operator-controller-init-678d9cfb88-shfdl\" (UID: \"53e9d66f-621b-4d3e-a96a-1c330a53643b\") " pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-shfdl" Jan 28 15:35:14 crc kubenswrapper[4871]: I0128 15:35:14.670037 4871 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-shfdl" Jan 28 15:35:14 crc kubenswrapper[4871]: I0128 15:35:14.807652 4871 generic.go:334] "Generic (PLEG): container finished" podID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerID="504ae14f7e055da72d55c5a96bfc70153a17e45dce0bb3e15ce3dccb6926e332" exitCode=0 Jan 28 15:35:14 crc kubenswrapper[4871]: I0128 15:35:14.807729 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerDied","Data":"504ae14f7e055da72d55c5a96bfc70153a17e45dce0bb3e15ce3dccb6926e332"} Jan 28 15:35:14 crc kubenswrapper[4871]: I0128 15:35:14.808021 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerStarted","Data":"68a4d17dddc2aafde953ccd496bb4751f12ecdd49f81aa1c0d30f071f672c508"} Jan 28 15:35:14 crc kubenswrapper[4871]: I0128 15:35:14.808064 4871 scope.go:117] "RemoveContainer" containerID="99a5b6a3a56a0129d3e0910f8bee719a8f441dd30871a64b674173f9c9123f18" Jan 28 15:35:15 crc kubenswrapper[4871]: I0128 15:35:15.073767 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-678d9cfb88-shfdl"] Jan 28 15:35:15 crc kubenswrapper[4871]: I0128 15:35:15.079884 4871 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:35:15 crc kubenswrapper[4871]: I0128 15:35:15.814301 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-shfdl" event={"ID":"53e9d66f-621b-4d3e-a96a-1c330a53643b","Type":"ContainerStarted","Data":"b60fc195848512a96354667ea397df013e3c58a4cd4ec183f783b6c4ab63b58a"} Jan 28 15:35:19 crc kubenswrapper[4871]: I0128 15:35:19.849022 4871 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-shfdl" event={"ID":"53e9d66f-621b-4d3e-a96a-1c330a53643b","Type":"ContainerStarted","Data":"b7a553c87d6801eaf079fd298ec711e0a73ab5b4d5aa4660a98c0999c4d1e934"} Jan 28 15:35:19 crc kubenswrapper[4871]: I0128 15:35:19.849699 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-shfdl" Jan 28 15:35:19 crc kubenswrapper[4871]: I0128 15:35:19.876939 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-shfdl" podStartSLOduration=1.707637932 podStartE2EDuration="5.876918308s" podCreationTimestamp="2026-01-28 15:35:14 +0000 UTC" firstStartedPulling="2026-01-28 15:35:15.079540922 +0000 UTC m=+1066.975379244" lastFinishedPulling="2026-01-28 15:35:19.248821298 +0000 UTC m=+1071.144659620" observedRunningTime="2026-01-28 15:35:19.872606522 +0000 UTC m=+1071.768444844" watchObservedRunningTime="2026-01-28 15:35:19.876918308 +0000 UTC m=+1071.772756630" Jan 28 15:35:24 crc kubenswrapper[4871]: I0128 15:35:24.675955 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-shfdl" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.297917 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hkk6l"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.299544 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hkk6l" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.301339 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-c75wh" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.303884 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hkk6l"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.313146 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-f6487bd57-tb9cx"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.313884 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-tb9cx" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.330575 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-lfdnf" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.332011 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-f6487bd57-tb9cx"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.342305 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66dfbd6f5d-vjjlf"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.343311 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-vjjlf" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.345836 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-vpjkp" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.359658 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66dfbd6f5d-vjjlf"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.374893 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-587c6bfdcf-ngtrg"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.378845 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-ngtrg" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.383925 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-kdkq7" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.394786 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq8t4\" (UniqueName: \"kubernetes.io/projected/4f068c70-d72a-4582-96e6-891b7269b1ba-kube-api-access-gq8t4\") pod \"cinder-operator-controller-manager-f6487bd57-tb9cx\" (UID: \"4f068c70-d72a-4582-96e6-891b7269b1ba\") " pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-tb9cx" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.394850 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkfmn\" (UniqueName: \"kubernetes.io/projected/74c0f096-51ac-459a-b9f2-a7cb7f462734-kube-api-access-gkfmn\") pod \"heat-operator-controller-manager-587c6bfdcf-ngtrg\" (UID: \"74c0f096-51ac-459a-b9f2-a7cb7f462734\") " 
pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-ngtrg" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.394876 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps9vc\" (UniqueName: \"kubernetes.io/projected/270211d8-fb57-4cb0-ba0b-9de5ae660e2e-kube-api-access-ps9vc\") pod \"barbican-operator-controller-manager-6bc7f4f4cf-hkk6l\" (UID: \"270211d8-fb57-4cb0-ba0b-9de5ae660e2e\") " pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hkk6l" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.394929 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd2zq\" (UniqueName: \"kubernetes.io/projected/95211b62-9193-4fe4-b851-fe46793fac5b-kube-api-access-vd2zq\") pod \"designate-operator-controller-manager-66dfbd6f5d-vjjlf\" (UID: \"95211b62-9193-4fe4-b851-fe46793fac5b\") " pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-vjjlf" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.402265 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-587c6bfdcf-ngtrg"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.414651 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-mvh4p"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.415710 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mvh4p" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.418146 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-n5czr" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.421749 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-6db5dbd896-l6xx2"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.422683 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-l6xx2" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.426731 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-mvh4p"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.429978 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-tcz7c" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.441028 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.441824 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.451115 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-jxlg7" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.451126 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.460673 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6db5dbd896-l6xx2"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.470372 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-958664b5-98l68"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.471486 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-958664b5-98l68" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.477258 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-22rd2" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.485660 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.495547 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b84b46695-7jmcc"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.496227 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq8t4\" (UniqueName: \"kubernetes.io/projected/4f068c70-d72a-4582-96e6-891b7269b1ba-kube-api-access-gq8t4\") pod 
\"cinder-operator-controller-manager-f6487bd57-tb9cx\" (UID: \"4f068c70-d72a-4582-96e6-891b7269b1ba\") " pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-tb9cx" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.496274 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkfmn\" (UniqueName: \"kubernetes.io/projected/74c0f096-51ac-459a-b9f2-a7cb7f462734-kube-api-access-gkfmn\") pod \"heat-operator-controller-manager-587c6bfdcf-ngtrg\" (UID: \"74c0f096-51ac-459a-b9f2-a7cb7f462734\") " pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-ngtrg" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.496299 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps9vc\" (UniqueName: \"kubernetes.io/projected/270211d8-fb57-4cb0-ba0b-9de5ae660e2e-kube-api-access-ps9vc\") pod \"barbican-operator-controller-manager-6bc7f4f4cf-hkk6l\" (UID: \"270211d8-fb57-4cb0-ba0b-9de5ae660e2e\") " pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hkk6l" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.496358 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd2zq\" (UniqueName: \"kubernetes.io/projected/95211b62-9193-4fe4-b851-fe46793fac5b-kube-api-access-vd2zq\") pod \"designate-operator-controller-manager-66dfbd6f5d-vjjlf\" (UID: \"95211b62-9193-4fe4-b851-fe46793fac5b\") " pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-vjjlf" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.496501 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-7jmcc" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.508005 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-b92zz" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.517334 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b84b46695-7jmcc"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.522675 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-765668569f-4ncq7"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.523682 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-765668569f-4ncq7" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.530067 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rtt8s" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.533850 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-958664b5-98l68"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.533909 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-kh8pk"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.536445 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps9vc\" (UniqueName: \"kubernetes.io/projected/270211d8-fb57-4cb0-ba0b-9de5ae660e2e-kube-api-access-ps9vc\") pod \"barbican-operator-controller-manager-6bc7f4f4cf-hkk6l\" (UID: \"270211d8-fb57-4cb0-ba0b-9de5ae660e2e\") " pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hkk6l" Jan 28 15:35:42 crc 
kubenswrapper[4871]: I0128 15:35:42.544504 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkfmn\" (UniqueName: \"kubernetes.io/projected/74c0f096-51ac-459a-b9f2-a7cb7f462734-kube-api-access-gkfmn\") pod \"heat-operator-controller-manager-587c6bfdcf-ngtrg\" (UID: \"74c0f096-51ac-459a-b9f2-a7cb7f462734\") " pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-ngtrg" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.552975 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd2zq\" (UniqueName: \"kubernetes.io/projected/95211b62-9193-4fe4-b851-fe46793fac5b-kube-api-access-vd2zq\") pod \"designate-operator-controller-manager-66dfbd6f5d-vjjlf\" (UID: \"95211b62-9193-4fe4-b851-fe46793fac5b\") " pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-vjjlf" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.555636 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq8t4\" (UniqueName: \"kubernetes.io/projected/4f068c70-d72a-4582-96e6-891b7269b1ba-kube-api-access-gq8t4\") pod \"cinder-operator-controller-manager-f6487bd57-tb9cx\" (UID: \"4f068c70-d72a-4582-96e6-891b7269b1ba\") " pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-tb9cx" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.563043 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-765668569f-4ncq7"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.563086 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-kh8pk"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.563102 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-694c5bfc85-cvkg8"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.563351 4871 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-kh8pk" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.563944 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-cvkg8" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.571452 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-tpljx" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.571779 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-dtgzm" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.574440 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-svf5f"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.575302 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-svf5f" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.576843 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-bznl9" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.583208 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-694c5bfc85-cvkg8"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.597456 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rdv8\" (UniqueName: \"kubernetes.io/projected/016e77e5-e2ea-4284-966f-16c5773febce-kube-api-access-2rdv8\") pod \"keystone-operator-controller-manager-7b84b46695-7jmcc\" (UID: \"016e77e5-e2ea-4284-966f-16c5773febce\") " pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-7jmcc" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.597520 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f5zv\" (UniqueName: \"kubernetes.io/projected/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-kube-api-access-8f5zv\") pod \"infra-operator-controller-manager-79955696d6-vnt6l\" (UID: \"91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.597554 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fslk8\" (UniqueName: \"kubernetes.io/projected/b1b132cc-7a24-4a38-bf0b-6d26b36e551b-kube-api-access-fslk8\") pod \"ironic-operator-controller-manager-958664b5-98l68\" (UID: \"b1b132cc-7a24-4a38-bf0b-6d26b36e551b\") " pod="openstack-operators/ironic-operator-controller-manager-958664b5-98l68" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.597610 
4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zptlb\" (UniqueName: \"kubernetes.io/projected/310a6fc1-965a-4af9-ab12-2c9b2f8046ff-kube-api-access-zptlb\") pod \"glance-operator-controller-manager-6db5dbd896-l6xx2\" (UID: \"310a6fc1-965a-4af9-ab12-2c9b2f8046ff\") " pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-l6xx2" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.597668 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ff7m\" (UniqueName: \"kubernetes.io/projected/41353a5b-bb79-45e7-8135-8229fa386ce4-kube-api-access-5ff7m\") pod \"horizon-operator-controller-manager-5fb775575f-mvh4p\" (UID: \"41353a5b-bb79-45e7-8135-8229fa386ce4\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mvh4p" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.597695 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-cert\") pod \"infra-operator-controller-manager-79955696d6-vnt6l\" (UID: \"91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.603900 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5c765b4558-dz587"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.605071 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-dz587" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.609280 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-rdsjn" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.617315 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hkk6l" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.619290 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-svf5f"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.627288 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5c765b4558-dz587"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.628252 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-tb9cx" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.632343 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-4dvc8"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.633203 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4dvc8" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.635985 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-tjcqk" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.647782 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-lrwg2"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.648827 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lrwg2" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.650839 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-s77xq" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.657652 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.658819 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.663413 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.663496 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-vp7hg" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.667644 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-vjjlf" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.675101 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-4dvc8"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.689699 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-lrwg2"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.698730 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rdv8\" (UniqueName: \"kubernetes.io/projected/016e77e5-e2ea-4284-966f-16c5773febce-kube-api-access-2rdv8\") pod \"keystone-operator-controller-manager-7b84b46695-7jmcc\" (UID: \"016e77e5-e2ea-4284-966f-16c5773febce\") " pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-7jmcc" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.698775 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f5zv\" (UniqueName: \"kubernetes.io/projected/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-kube-api-access-8f5zv\") pod \"infra-operator-controller-manager-79955696d6-vnt6l\" (UID: \"91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.698801 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fslk8\" (UniqueName: \"kubernetes.io/projected/b1b132cc-7a24-4a38-bf0b-6d26b36e551b-kube-api-access-fslk8\") pod \"ironic-operator-controller-manager-958664b5-98l68\" (UID: \"b1b132cc-7a24-4a38-bf0b-6d26b36e551b\") " pod="openstack-operators/ironic-operator-controller-manager-958664b5-98l68" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.698824 4871 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrqc2\" (UniqueName: \"kubernetes.io/projected/96fd0c1b-c934-4481-b198-38ca2cb9d187-kube-api-access-wrqc2\") pod \"manila-operator-controller-manager-765668569f-4ncq7\" (UID: \"96fd0c1b-c934-4481-b198-38ca2cb9d187\") " pod="openstack-operators/manila-operator-controller-manager-765668569f-4ncq7" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.698854 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg2kz\" (UniqueName: \"kubernetes.io/projected/806e885f-b6fc-4e7c-a81a-31bff54b7b06-kube-api-access-kg2kz\") pod \"nova-operator-controller-manager-ddcbfd695-svf5f\" (UID: \"806e885f-b6fc-4e7c-a81a-31bff54b7b06\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-svf5f" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.698876 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zptlb\" (UniqueName: \"kubernetes.io/projected/310a6fc1-965a-4af9-ab12-2c9b2f8046ff-kube-api-access-zptlb\") pod \"glance-operator-controller-manager-6db5dbd896-l6xx2\" (UID: \"310a6fc1-965a-4af9-ab12-2c9b2f8046ff\") " pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-l6xx2" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.698899 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw9cm\" (UniqueName: \"kubernetes.io/projected/866e339c-74a0-47e8-aff9-3463890568a9-kube-api-access-dw9cm\") pod \"neutron-operator-controller-manager-694c5bfc85-cvkg8\" (UID: \"866e339c-74a0-47e8-aff9-3463890568a9\") " pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-cvkg8" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.698915 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfv4r\" (UniqueName: 
\"kubernetes.io/projected/a7b94f34-87cf-4992-9480-4019281227c4-kube-api-access-kfv4r\") pod \"mariadb-operator-controller-manager-67bf948998-kh8pk\" (UID: \"a7b94f34-87cf-4992-9480-4019281227c4\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-kh8pk" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.698949 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ff7m\" (UniqueName: \"kubernetes.io/projected/41353a5b-bb79-45e7-8135-8229fa386ce4-kube-api-access-5ff7m\") pod \"horizon-operator-controller-manager-5fb775575f-mvh4p\" (UID: \"41353a5b-bb79-45e7-8135-8229fa386ce4\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mvh4p" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.698977 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-cert\") pod \"infra-operator-controller-manager-79955696d6-vnt6l\" (UID: \"91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l" Jan 28 15:35:42 crc kubenswrapper[4871]: E0128 15:35:42.701835 4871 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:35:42 crc kubenswrapper[4871]: E0128 15:35:42.701939 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-cert podName:91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0 nodeName:}" failed. No retries permitted until 2026-01-28 15:35:43.201889453 +0000 UTC m=+1095.097727785 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-cert") pod "infra-operator-controller-manager-79955696d6-vnt6l" (UID: "91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.704097 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-ngtrg" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.705518 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-nrtm8"] Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.715086 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nrtm8" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.721399 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rdv8\" (UniqueName: \"kubernetes.io/projected/016e77e5-e2ea-4284-966f-16c5773febce-kube-api-access-2rdv8\") pod \"keystone-operator-controller-manager-7b84b46695-7jmcc\" (UID: \"016e77e5-e2ea-4284-966f-16c5773febce\") " pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-7jmcc" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.721688 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ff7m\" (UniqueName: \"kubernetes.io/projected/41353a5b-bb79-45e7-8135-8229fa386ce4-kube-api-access-5ff7m\") pod \"horizon-operator-controller-manager-5fb775575f-mvh4p\" (UID: \"41353a5b-bb79-45e7-8135-8229fa386ce4\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mvh4p" Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.722284 4871 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4m4jz"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.729775 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fslk8\" (UniqueName: \"kubernetes.io/projected/b1b132cc-7a24-4a38-bf0b-6d26b36e551b-kube-api-access-fslk8\") pod \"ironic-operator-controller-manager-958664b5-98l68\" (UID: \"b1b132cc-7a24-4a38-bf0b-6d26b36e551b\") " pod="openstack-operators/ironic-operator-controller-manager-958664b5-98l68"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.730254 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-nrtm8"]
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.733133 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f5zv\" (UniqueName: \"kubernetes.io/projected/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-kube-api-access-8f5zv\") pod \"infra-operator-controller-manager-79955696d6-vnt6l\" (UID: \"91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.733691 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zptlb\" (UniqueName: \"kubernetes.io/projected/310a6fc1-965a-4af9-ab12-2c9b2f8046ff-kube-api-access-zptlb\") pod \"glance-operator-controller-manager-6db5dbd896-l6xx2\" (UID: \"310a6fc1-965a-4af9-ab12-2c9b2f8046ff\") " pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-l6xx2"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.733751 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g"]
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.756916 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mvh4p"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.760206 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-l6xx2"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.801334 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrqc2\" (UniqueName: \"kubernetes.io/projected/96fd0c1b-c934-4481-b198-38ca2cb9d187-kube-api-access-wrqc2\") pod \"manila-operator-controller-manager-765668569f-4ncq7\" (UID: \"96fd0c1b-c934-4481-b198-38ca2cb9d187\") " pod="openstack-operators/manila-operator-controller-manager-765668569f-4ncq7"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.801401 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt2vz\" (UniqueName: \"kubernetes.io/projected/ad851357-ed69-4ed0-80a4-1de2b1725d37-kube-api-access-nt2vz\") pod \"ovn-operator-controller-manager-788c46999f-4dvc8\" (UID: \"ad851357-ed69-4ed0-80a4-1de2b1725d37\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4dvc8"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.801433 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7b5m\" (UniqueName: \"kubernetes.io/projected/31e7b45a-da4b-4920-895a-d51dba36168e-kube-api-access-x7b5m\") pod \"octavia-operator-controller-manager-5c765b4558-dz587\" (UID: \"31e7b45a-da4b-4920-895a-d51dba36168e\") " pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-dz587"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.801457 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg2kz\" (UniqueName: \"kubernetes.io/projected/806e885f-b6fc-4e7c-a81a-31bff54b7b06-kube-api-access-kg2kz\") pod \"nova-operator-controller-manager-ddcbfd695-svf5f\" (UID: \"806e885f-b6fc-4e7c-a81a-31bff54b7b06\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-svf5f"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.801483 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw9cm\" (UniqueName: \"kubernetes.io/projected/866e339c-74a0-47e8-aff9-3463890568a9-kube-api-access-dw9cm\") pod \"neutron-operator-controller-manager-694c5bfc85-cvkg8\" (UID: \"866e339c-74a0-47e8-aff9-3463890568a9\") " pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-cvkg8"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.801523 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfv4r\" (UniqueName: \"kubernetes.io/projected/a7b94f34-87cf-4992-9480-4019281227c4-kube-api-access-kfv4r\") pod \"mariadb-operator-controller-manager-67bf948998-kh8pk\" (UID: \"a7b94f34-87cf-4992-9480-4019281227c4\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-kh8pk"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.801550 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a19393cd-d011-4387-9a34-07b67bd30d4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g\" (UID: \"a19393cd-d011-4387-9a34-07b67bd30d4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.801611 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhpm9\" (UniqueName: \"kubernetes.io/projected/2fa4f067-8eed-44b7-995a-5160ee0576c6-kube-api-access-mhpm9\") pod \"placement-operator-controller-manager-5b964cf4cd-lrwg2\" (UID: \"2fa4f067-8eed-44b7-995a-5160ee0576c6\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lrwg2"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.801673 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfmlh\" (UniqueName: \"kubernetes.io/projected/a19393cd-d011-4387-9a34-07b67bd30d4e-kube-api-access-kfmlh\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g\" (UID: \"a19393cd-d011-4387-9a34-07b67bd30d4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.807780 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-958664b5-98l68"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.814819 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-qvbqw"]
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.817017 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-qvbqw"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.823039 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-66njd"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.827770 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-qvbqw"]
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.828039 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-7jmcc"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.835735 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrqc2\" (UniqueName: \"kubernetes.io/projected/96fd0c1b-c934-4481-b198-38ca2cb9d187-kube-api-access-wrqc2\") pod \"manila-operator-controller-manager-765668569f-4ncq7\" (UID: \"96fd0c1b-c934-4481-b198-38ca2cb9d187\") " pod="openstack-operators/manila-operator-controller-manager-765668569f-4ncq7"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.838776 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfv4r\" (UniqueName: \"kubernetes.io/projected/a7b94f34-87cf-4992-9480-4019281227c4-kube-api-access-kfv4r\") pod \"mariadb-operator-controller-manager-67bf948998-kh8pk\" (UID: \"a7b94f34-87cf-4992-9480-4019281227c4\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-kh8pk"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.846145 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg2kz\" (UniqueName: \"kubernetes.io/projected/806e885f-b6fc-4e7c-a81a-31bff54b7b06-kube-api-access-kg2kz\") pod \"nova-operator-controller-manager-ddcbfd695-svf5f\" (UID: \"806e885f-b6fc-4e7c-a81a-31bff54b7b06\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-svf5f"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.851952 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw9cm\" (UniqueName: \"kubernetes.io/projected/866e339c-74a0-47e8-aff9-3463890568a9-kube-api-access-dw9cm\") pod \"neutron-operator-controller-manager-694c5bfc85-cvkg8\" (UID: \"866e339c-74a0-47e8-aff9-3463890568a9\") " pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-cvkg8"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.860621 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-d62vq"]
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.862310 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d62vq"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.867022 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-pzwkj"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.868726 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-d62vq"]
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.903062 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5z6h\" (UniqueName: \"kubernetes.io/projected/1441eab7-88f7-4278-b61e-15822bf73aca-kube-api-access-m5z6h\") pod \"swift-operator-controller-manager-68fc8c869-nrtm8\" (UID: \"1441eab7-88f7-4278-b61e-15822bf73aca\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nrtm8"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.903120 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-rqmz6"]
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.903161 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt2vz\" (UniqueName: \"kubernetes.io/projected/ad851357-ed69-4ed0-80a4-1de2b1725d37-kube-api-access-nt2vz\") pod \"ovn-operator-controller-manager-788c46999f-4dvc8\" (UID: \"ad851357-ed69-4ed0-80a4-1de2b1725d37\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4dvc8"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.903194 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7b5m\" (UniqueName: \"kubernetes.io/projected/31e7b45a-da4b-4920-895a-d51dba36168e-kube-api-access-x7b5m\") pod \"octavia-operator-controller-manager-5c765b4558-dz587\" (UID: \"31e7b45a-da4b-4920-895a-d51dba36168e\") " pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-dz587"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.903229 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a19393cd-d011-4387-9a34-07b67bd30d4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g\" (UID: \"a19393cd-d011-4387-9a34-07b67bd30d4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.903267 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhpm9\" (UniqueName: \"kubernetes.io/projected/2fa4f067-8eed-44b7-995a-5160ee0576c6-kube-api-access-mhpm9\") pod \"placement-operator-controller-manager-5b964cf4cd-lrwg2\" (UID: \"2fa4f067-8eed-44b7-995a-5160ee0576c6\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lrwg2"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.903314 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfmlh\" (UniqueName: \"kubernetes.io/projected/a19393cd-d011-4387-9a34-07b67bd30d4e-kube-api-access-kfmlh\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g\" (UID: \"a19393cd-d011-4387-9a34-07b67bd30d4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g"
Jan 28 15:35:42 crc kubenswrapper[4871]: E0128 15:35:42.903619 4871 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 28 15:35:42 crc kubenswrapper[4871]: E0128 15:35:42.903683 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a19393cd-d011-4387-9a34-07b67bd30d4e-cert podName:a19393cd-d011-4387-9a34-07b67bd30d4e nodeName:}" failed. No retries permitted until 2026-01-28 15:35:43.40366478 +0000 UTC m=+1095.299503102 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a19393cd-d011-4387-9a34-07b67bd30d4e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g" (UID: "a19393cd-d011-4387-9a34-07b67bd30d4e") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.906932 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-765668569f-4ncq7"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.906989 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-rqmz6"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.914934 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-x9v88"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.923300 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-kh8pk"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.931698 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfmlh\" (UniqueName: \"kubernetes.io/projected/a19393cd-d011-4387-9a34-07b67bd30d4e-kube-api-access-kfmlh\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g\" (UID: \"a19393cd-d011-4387-9a34-07b67bd30d4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.932006 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-cvkg8"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.946033 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhpm9\" (UniqueName: \"kubernetes.io/projected/2fa4f067-8eed-44b7-995a-5160ee0576c6-kube-api-access-mhpm9\") pod \"placement-operator-controller-manager-5b964cf4cd-lrwg2\" (UID: \"2fa4f067-8eed-44b7-995a-5160ee0576c6\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lrwg2"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.946363 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7b5m\" (UniqueName: \"kubernetes.io/projected/31e7b45a-da4b-4920-895a-d51dba36168e-kube-api-access-x7b5m\") pod \"octavia-operator-controller-manager-5c765b4558-dz587\" (UID: \"31e7b45a-da4b-4920-895a-d51dba36168e\") " pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-dz587"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.946794 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt2vz\" (UniqueName: \"kubernetes.io/projected/ad851357-ed69-4ed0-80a4-1de2b1725d37-kube-api-access-nt2vz\") pod \"ovn-operator-controller-manager-788c46999f-4dvc8\" (UID: \"ad851357-ed69-4ed0-80a4-1de2b1725d37\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4dvc8"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.966138 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-svf5f"
Jan 28 15:35:42 crc kubenswrapper[4871]: I0128 15:35:42.999056 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-rqmz6"]
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.001561 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j"]
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.002361 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-dz587"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.004037 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5z6h\" (UniqueName: \"kubernetes.io/projected/1441eab7-88f7-4278-b61e-15822bf73aca-kube-api-access-m5z6h\") pod \"swift-operator-controller-manager-68fc8c869-nrtm8\" (UID: \"1441eab7-88f7-4278-b61e-15822bf73aca\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nrtm8"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.004082 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9sw4\" (UniqueName: \"kubernetes.io/projected/936d0985-4e97-40ed-b0c4-e0eb92d4372f-kube-api-access-z9sw4\") pod \"telemetry-operator-controller-manager-6d69b9c5db-qvbqw\" (UID: \"936d0985-4e97-40ed-b0c4-e0eb92d4372f\") " pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-qvbqw"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.004105 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hfqx\" (UniqueName: \"kubernetes.io/projected/8552dce5-130b-4598-8101-89ea1c19dc3a-kube-api-access-9hfqx\") pod \"test-operator-controller-manager-56f8bfcd9f-d62vq\" (UID: \"8552dce5-130b-4598-8101-89ea1c19dc3a\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d62vq"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.004134 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rknhf\" (UniqueName: \"kubernetes.io/projected/22008ca1-ed64-4a04-b45a-c9808ad68773-kube-api-access-rknhf\") pod \"watcher-operator-controller-manager-767b8bc766-rqmz6\" (UID: \"22008ca1-ed64-4a04-b45a-c9808ad68773\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-rqmz6"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.008084 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.012391 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.012678 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-m59db"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.012800 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.031725 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5z6h\" (UniqueName: \"kubernetes.io/projected/1441eab7-88f7-4278-b61e-15822bf73aca-kube-api-access-m5z6h\") pod \"swift-operator-controller-manager-68fc8c869-nrtm8\" (UID: \"1441eab7-88f7-4278-b61e-15822bf73aca\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nrtm8"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.057929 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j"]
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.105215 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4dvc8"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.105524 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: \"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.105663 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9sw4\" (UniqueName: \"kubernetes.io/projected/936d0985-4e97-40ed-b0c4-e0eb92d4372f-kube-api-access-z9sw4\") pod \"telemetry-operator-controller-manager-6d69b9c5db-qvbqw\" (UID: \"936d0985-4e97-40ed-b0c4-e0eb92d4372f\") " pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-qvbqw"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.105698 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hfqx\" (UniqueName: \"kubernetes.io/projected/8552dce5-130b-4598-8101-89ea1c19dc3a-kube-api-access-9hfqx\") pod \"test-operator-controller-manager-56f8bfcd9f-d62vq\" (UID: \"8552dce5-130b-4598-8101-89ea1c19dc3a\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d62vq"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.105734 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rknhf\" (UniqueName: \"kubernetes.io/projected/22008ca1-ed64-4a04-b45a-c9808ad68773-kube-api-access-rknhf\") pod \"watcher-operator-controller-manager-767b8bc766-rqmz6\" (UID: \"22008ca1-ed64-4a04-b45a-c9808ad68773\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-rqmz6"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.105760 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: \"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.105849 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvjkw\" (UniqueName: \"kubernetes.io/projected/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-kube-api-access-jvjkw\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: \"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.119850 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lrwg2"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.120300 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bg5kq"]
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.122958 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bg5kq"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.127624 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-rc62w"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.133258 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hfqx\" (UniqueName: \"kubernetes.io/projected/8552dce5-130b-4598-8101-89ea1c19dc3a-kube-api-access-9hfqx\") pod \"test-operator-controller-manager-56f8bfcd9f-d62vq\" (UID: \"8552dce5-130b-4598-8101-89ea1c19dc3a\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d62vq"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.137568 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bg5kq"]
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.139687 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rknhf\" (UniqueName: \"kubernetes.io/projected/22008ca1-ed64-4a04-b45a-c9808ad68773-kube-api-access-rknhf\") pod \"watcher-operator-controller-manager-767b8bc766-rqmz6\" (UID: \"22008ca1-ed64-4a04-b45a-c9808ad68773\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-rqmz6"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.151388 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9sw4\" (UniqueName: \"kubernetes.io/projected/936d0985-4e97-40ed-b0c4-e0eb92d4372f-kube-api-access-z9sw4\") pod \"telemetry-operator-controller-manager-6d69b9c5db-qvbqw\" (UID: \"936d0985-4e97-40ed-b0c4-e0eb92d4372f\") " pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-qvbqw"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.208131 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: \"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.208265 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-cert\") pod \"infra-operator-controller-manager-79955696d6-vnt6l\" (UID: \"91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.208303 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvjkw\" (UniqueName: \"kubernetes.io/projected/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-kube-api-access-jvjkw\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: \"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.208350 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: \"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j"
Jan 28 15:35:43 crc kubenswrapper[4871]: E0128 15:35:43.208521 4871 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 28 15:35:43 crc kubenswrapper[4871]: E0128 15:35:43.208586 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs podName:a1f1fd07-5c03-420c-bb27-e5ec2fece55b nodeName:}" failed. No retries permitted until 2026-01-28 15:35:43.708564732 +0000 UTC m=+1095.604403054 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs") pod "openstack-operator-controller-manager-57d89bf95c-d4p8j" (UID: "a1f1fd07-5c03-420c-bb27-e5ec2fece55b") : secret "webhook-server-cert" not found
Jan 28 15:35:43 crc kubenswrapper[4871]: E0128 15:35:43.208891 4871 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 28 15:35:43 crc kubenswrapper[4871]: E0128 15:35:43.208972 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-cert podName:91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0 nodeName:}" failed. No retries permitted until 2026-01-28 15:35:44.208950924 +0000 UTC m=+1096.104789326 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-cert") pod "infra-operator-controller-manager-79955696d6-vnt6l" (UID: "91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0") : secret "infra-operator-webhook-server-cert" not found
Jan 28 15:35:43 crc kubenswrapper[4871]: E0128 15:35:43.208896 4871 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 28 15:35:43 crc kubenswrapper[4871]: E0128 15:35:43.209016 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-metrics-certs podName:a1f1fd07-5c03-420c-bb27-e5ec2fece55b nodeName:}" failed. No retries permitted until 2026-01-28 15:35:43.709009536 +0000 UTC m=+1095.604847988 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-metrics-certs") pod "openstack-operator-controller-manager-57d89bf95c-d4p8j" (UID: "a1f1fd07-5c03-420c-bb27-e5ec2fece55b") : secret "metrics-server-cert" not found
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.229745 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvjkw\" (UniqueName: \"kubernetes.io/projected/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-kube-api-access-jvjkw\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: \"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.261352 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nrtm8"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.306021 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-qvbqw"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.309088 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp6dm\" (UniqueName: \"kubernetes.io/projected/3481a933-7882-4030-b852-9eb2f9f89b88-kube-api-access-kp6dm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-bg5kq\" (UID: \"3481a933-7882-4030-b852-9eb2f9f89b88\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bg5kq"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.320258 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d62vq"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.342181 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-rqmz6"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.411263 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp6dm\" (UniqueName: \"kubernetes.io/projected/3481a933-7882-4030-b852-9eb2f9f89b88-kube-api-access-kp6dm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-bg5kq\" (UID: \"3481a933-7882-4030-b852-9eb2f9f89b88\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bg5kq"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.411375 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a19393cd-d011-4387-9a34-07b67bd30d4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g\" (UID: \"a19393cd-d011-4387-9a34-07b67bd30d4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g"
Jan 28 15:35:43 crc kubenswrapper[4871]: E0128 15:35:43.411558 4871 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 28 15:35:43 crc kubenswrapper[4871]: E0128 15:35:43.411635 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a19393cd-d011-4387-9a34-07b67bd30d4e-cert podName:a19393cd-d011-4387-9a34-07b67bd30d4e nodeName:}" failed. No retries permitted until 2026-01-28 15:35:44.411616539 +0000 UTC m=+1096.307454861 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a19393cd-d011-4387-9a34-07b67bd30d4e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g" (UID: "a19393cd-d011-4387-9a34-07b67bd30d4e") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.439091 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp6dm\" (UniqueName: \"kubernetes.io/projected/3481a933-7882-4030-b852-9eb2f9f89b88-kube-api-access-kp6dm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-bg5kq\" (UID: \"3481a933-7882-4030-b852-9eb2f9f89b88\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bg5kq"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.454252 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bg5kq"
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.664863 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66dfbd6f5d-vjjlf"]
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.688409 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hkk6l"]
Jan 28 15:35:43 crc kubenswrapper[4871]: W0128 15:35:43.710026 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod270211d8_fb57_4cb0_ba0b_9de5ae660e2e.slice/crio-7fd0d93753631b0c9cb0ed71e95f43a312e242cfbad1c3f1fc636b38388775ac WatchSource:0}: Error finding container 7fd0d93753631b0c9cb0ed71e95f43a312e242cfbad1c3f1fc636b38388775ac: Status 404 returned error can't find the container with id 7fd0d93753631b0c9cb0ed71e95f43a312e242cfbad1c3f1fc636b38388775ac
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.716808 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: \"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j"
Jan 28 15:35:43 crc kubenswrapper[4871]: E0128 15:35:43.716933 4871 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 28 15:35:43 crc kubenswrapper[4871]: E0128 15:35:43.716984 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs podName:a1f1fd07-5c03-420c-bb27-e5ec2fece55b nodeName:}" failed. No retries permitted until 2026-01-28 15:35:44.716970414 +0000 UTC m=+1096.612808736 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs") pod "openstack-operator-controller-manager-57d89bf95c-d4p8j" (UID: "a1f1fd07-5c03-420c-bb27-e5ec2fece55b") : secret "webhook-server-cert" not found
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.717335 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: \"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j"
Jan 28 15:35:43 crc kubenswrapper[4871]: E0128 15:35:43.717466 4871 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 28 15:35:43 crc kubenswrapper[4871]: E0128 15:35:43.717494 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-metrics-certs podName:a1f1fd07-5c03-420c-bb27-e5ec2fece55b nodeName:}" failed. No retries permitted until 2026-01-28 15:35:44.717486701 +0000 UTC m=+1096.613325023 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-metrics-certs") pod "openstack-operator-controller-manager-57d89bf95c-d4p8j" (UID: "a1f1fd07-5c03-420c-bb27-e5ec2fece55b") : secret "metrics-server-cert" not found
Jan 28 15:35:43 crc kubenswrapper[4871]: I0128 15:35:43.745791 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-f6487bd57-tb9cx"]
Jan 28 15:35:43 crc kubenswrapper[4871]: W0128 15:35:43.751942 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f068c70_d72a_4582_96e6_891b7269b1ba.slice/crio-fc7fd0910e6afff9795ec055b27eb1e04be93f55b3e47586ac20f21cce8d93d9 WatchSource:0}: Error finding container fc7fd0910e6afff9795ec055b27eb1e04be93f55b3e47586ac20f21cce8d93d9: Status 404 returned error can't find the container with id fc7fd0910e6afff9795ec055b27eb1e04be93f55b3e47586ac20f21cce8d93d9
Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.021679 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-vjjlf" event={"ID":"95211b62-9193-4fe4-b851-fe46793fac5b","Type":"ContainerStarted","Data":"d5afeebd581be06f1b1d8d1b183f5cacd850409bf38e0561010e37d6b34407ea"}
Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.022799 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-tb9cx" event={"ID":"4f068c70-d72a-4582-96e6-891b7269b1ba","Type":"ContainerStarted","Data":"fc7fd0910e6afff9795ec055b27eb1e04be93f55b3e47586ac20f21cce8d93d9"}
Jan 28
15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.024150 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hkk6l" event={"ID":"270211d8-fb57-4cb0-ba0b-9de5ae660e2e","Type":"ContainerStarted","Data":"7fd0d93753631b0c9cb0ed71e95f43a312e242cfbad1c3f1fc636b38388775ac"} Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.116794 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-958664b5-98l68"] Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.130005 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-svf5f"] Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.137428 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-mvh4p"] Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.155988 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6db5dbd896-l6xx2"] Jan 28 15:35:44 crc kubenswrapper[4871]: W0128 15:35:44.157870 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod806e885f_b6fc_4e7c_a81a_31bff54b7b06.slice/crio-fbfa956eeea6be40ee0c9dd50991cb1d428d5a1957ce03ede0595a06c2ba5874 WatchSource:0}: Error finding container fbfa956eeea6be40ee0c9dd50991cb1d428d5a1957ce03ede0595a06c2ba5874: Status 404 returned error can't find the container with id fbfa956eeea6be40ee0c9dd50991cb1d428d5a1957ce03ede0595a06c2ba5874 Jan 28 15:35:44 crc kubenswrapper[4871]: W0128 15:35:44.173805 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1b132cc_7a24_4a38_bf0b_6d26b36e551b.slice/crio-04ca1238f3fbace5aa80be77b01d02236bf8f438b1acc4ca91936e7a0f26154b WatchSource:0}: Error 
finding container 04ca1238f3fbace5aa80be77b01d02236bf8f438b1acc4ca91936e7a0f26154b: Status 404 returned error can't find the container with id 04ca1238f3fbace5aa80be77b01d02236bf8f438b1acc4ca91936e7a0f26154b Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.174642 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-kh8pk"] Jan 28 15:35:44 crc kubenswrapper[4871]: W0128 15:35:44.187485 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad851357_ed69_4ed0_80a4_1de2b1725d37.slice/crio-be32ee0ceedb17d15b3a8ce07b990fb3896127f988c5594a926e0e0ed3ccb950 WatchSource:0}: Error finding container be32ee0ceedb17d15b3a8ce07b990fb3896127f988c5594a926e0e0ed3ccb950: Status 404 returned error can't find the container with id be32ee0ceedb17d15b3a8ce07b990fb3896127f988c5594a926e0e0ed3ccb950 Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.189138 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-4dvc8"] Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.196737 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-694c5bfc85-cvkg8"] Jan 28 15:35:44 crc kubenswrapper[4871]: W0128 15:35:44.205917 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod866e339c_74a0_47e8_aff9_3463890568a9.slice/crio-1f4399584c4e2ba37a57dc0bf05f55cee1a35f9a90832612894ab34066d41148 WatchSource:0}: Error finding container 1f4399584c4e2ba37a57dc0bf05f55cee1a35f9a90832612894ab34066d41148: Status 404 returned error can't find the container with id 1f4399584c4e2ba37a57dc0bf05f55cee1a35f9a90832612894ab34066d41148 Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.207230 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-lrwg2"] Jan 28 15:35:44 crc kubenswrapper[4871]: W0128 15:35:44.209492 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod016e77e5_e2ea_4284_966f_16c5773febce.slice/crio-9950250960e537cfe51cec50e13317a92847c598fb655a8fee0b0cacd16955f8 WatchSource:0}: Error finding container 9950250960e537cfe51cec50e13317a92847c598fb655a8fee0b0cacd16955f8: Status 404 returned error can't find the container with id 9950250960e537cfe51cec50e13317a92847c598fb655a8fee0b0cacd16955f8 Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.219497 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5c765b4558-dz587"] Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.229005 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-587c6bfdcf-ngtrg"] Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.229098 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-cert\") pod \"infra-operator-controller-manager-79955696d6-vnt6l\" (UID: \"91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l" Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.229285 4871 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.229323 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-cert podName:91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0 nodeName:}" failed. 
No retries permitted until 2026-01-28 15:35:46.229309602 +0000 UTC m=+1098.125147924 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-cert") pod "infra-operator-controller-manager-79955696d6-vnt6l" (UID: "91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.240492 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b84b46695-7jmcc"] Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.248195 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-765668569f-4ncq7"] Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.249352 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kfv4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-kh8pk_openstack-operators(a7b94f34-87cf-4992-9480-4019281227c4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.249501 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/heat-operator@sha256:429171b44a24e9e4dde46465d90a272d93b15317ea386184d6ad077cc119d3c9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gkfmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-587c6bfdcf-ngtrg_openstack-operators(74c0f096-51ac-459a-b9f2-a7cb7f462734): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.250612 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-ngtrg" podUID="74c0f096-51ac-459a-b9f2-a7cb7f462734" Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.250633 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-kh8pk" podUID="a7b94f34-87cf-4992-9480-4019281227c4" Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.251377 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m5z6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-nrtm8_openstack-operators(1441eab7-88f7-4278-b61e-15822bf73aca): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:35:44 crc kubenswrapper[4871]: W0128 15:35:44.252580 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96fd0c1b_c934_4481_b198_38ca2cb9d187.slice/crio-c00d8310703825ea7f833e3a3206483786a88f68e7d380b8261798719cd36d67 WatchSource:0}: Error finding container c00d8310703825ea7f833e3a3206483786a88f68e7d380b8261798719cd36d67: Status 404 returned error can't find the container with id c00d8310703825ea7f833e3a3206483786a88f68e7d380b8261798719cd36d67 Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.252772 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nrtm8" podUID="1441eab7-88f7-4278-b61e-15822bf73aca" Jan 28 15:35:44 crc kubenswrapper[4871]: W0128 15:35:44.254161 4871 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8552dce5_130b_4598_8101_89ea1c19dc3a.slice/crio-b0cabafc7939581c6774c0111bde2495a0c901d3c3598c854972680adc91e3b6 WatchSource:0}: Error finding container b0cabafc7939581c6774c0111bde2495a0c901d3c3598c854972680adc91e3b6: Status 404 returned error can't find the container with id b0cabafc7939581c6774c0111bde2495a0c901d3c3598c854972680adc91e3b6 Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.256790 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-nrtm8"] Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.257698 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9hfqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-d62vq_openstack-operators(8552dce5-130b-4598-8101-89ea1c19dc3a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.257730 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/manila-operator@sha256:2e1a77365c3b08ff39892565abfc72b72e969f623e58a2663fb93890371fc9da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wrqc2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-765668569f-4ncq7_openstack-operators(96fd0c1b-c934-4481-b198-38ca2cb9d187): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.259196 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-765668569f-4ncq7" podUID="96fd0c1b-c934-4481-b198-38ca2cb9d187" Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.259209 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d62vq" podUID="8552dce5-130b-4598-8101-89ea1c19dc3a" Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.260824 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rknhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-767b8bc766-rqmz6_openstack-operators(22008ca1-ed64-4a04-b45a-c9808ad68773): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.260996 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:c9d639f3d01f7a4f139a8b7fb751ca880893f7b9a4e596d6a5304534e46392ba,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z9sw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6d69b9c5db-qvbqw_openstack-operators(936d0985-4e97-40ed-b0c4-e0eb92d4372f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.261280 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kp6dm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-bg5kq_openstack-operators(3481a933-7882-4030-b852-9eb2f9f89b88): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.261898 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-rqmz6" podUID="22008ca1-ed64-4a04-b45a-c9808ad68773" Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.262350 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bg5kq" podUID="3481a933-7882-4030-b852-9eb2f9f89b88" Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.262354 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-d62vq"] Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.262427 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-qvbqw" podUID="936d0985-4e97-40ed-b0c4-e0eb92d4372f" Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.269464 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-rqmz6"] Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.276993 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-qvbqw"] Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.284484 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bg5kq"] Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.431628 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/a19393cd-d011-4387-9a34-07b67bd30d4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g\" (UID: \"a19393cd-d011-4387-9a34-07b67bd30d4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g" Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.431868 4871 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.431980 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a19393cd-d011-4387-9a34-07b67bd30d4e-cert podName:a19393cd-d011-4387-9a34-07b67bd30d4e nodeName:}" failed. No retries permitted until 2026-01-28 15:35:46.431954327 +0000 UTC m=+1098.327792649 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a19393cd-d011-4387-9a34-07b67bd30d4e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g" (UID: "a19393cd-d011-4387-9a34-07b67bd30d4e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.735091 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: \"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j" Jan 28 15:35:44 crc kubenswrapper[4871]: I0128 15:35:44.735220 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: 
\"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j" Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.735305 4871 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.735370 4871 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.735395 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-metrics-certs podName:a1f1fd07-5c03-420c-bb27-e5ec2fece55b nodeName:}" failed. No retries permitted until 2026-01-28 15:35:46.735374482 +0000 UTC m=+1098.631212844 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-metrics-certs") pod "openstack-operator-controller-manager-57d89bf95c-d4p8j" (UID: "a1f1fd07-5c03-420c-bb27-e5ec2fece55b") : secret "metrics-server-cert" not found Jan 28 15:35:44 crc kubenswrapper[4871]: E0128 15:35:44.735432 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs podName:a1f1fd07-5c03-420c-bb27-e5ec2fece55b nodeName:}" failed. No retries permitted until 2026-01-28 15:35:46.735417113 +0000 UTC m=+1098.631255505 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs") pod "openstack-operator-controller-manager-57d89bf95c-d4p8j" (UID: "a1f1fd07-5c03-420c-bb27-e5ec2fece55b") : secret "webhook-server-cert" not found Jan 28 15:35:45 crc kubenswrapper[4871]: I0128 15:35:45.045896 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-kh8pk" event={"ID":"a7b94f34-87cf-4992-9480-4019281227c4","Type":"ContainerStarted","Data":"28302a315f57dc08158c75cb43c29efa44ffa36be6211099622bfc5f8b99e02d"} Jan 28 15:35:45 crc kubenswrapper[4871]: E0128 15:35:45.053268 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-kh8pk" podUID="a7b94f34-87cf-4992-9480-4019281227c4" Jan 28 15:35:45 crc kubenswrapper[4871]: I0128 15:35:45.058465 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-cvkg8" event={"ID":"866e339c-74a0-47e8-aff9-3463890568a9","Type":"ContainerStarted","Data":"1f4399584c4e2ba37a57dc0bf05f55cee1a35f9a90832612894ab34066d41148"} Jan 28 15:35:45 crc kubenswrapper[4871]: I0128 15:35:45.076786 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d62vq" event={"ID":"8552dce5-130b-4598-8101-89ea1c19dc3a","Type":"ContainerStarted","Data":"b0cabafc7939581c6774c0111bde2495a0c901d3c3598c854972680adc91e3b6"} Jan 28 15:35:45 crc kubenswrapper[4871]: E0128 15:35:45.080000 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d62vq" podUID="8552dce5-130b-4598-8101-89ea1c19dc3a" Jan 28 15:35:45 crc kubenswrapper[4871]: I0128 15:35:45.089486 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lrwg2" event={"ID":"2fa4f067-8eed-44b7-995a-5160ee0576c6","Type":"ContainerStarted","Data":"011593867d188085e0cb3ba45bc04f39e6fb38034699268d14a0b0b2fcd1b2ec"} Jan 28 15:35:45 crc kubenswrapper[4871]: I0128 15:35:45.101781 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-dz587" event={"ID":"31e7b45a-da4b-4920-895a-d51dba36168e","Type":"ContainerStarted","Data":"182a7025ef7d9dacd49bde4f9247293804311a82970e0d3707f73aa1e05f3756"} Jan 28 15:35:45 crc kubenswrapper[4871]: I0128 15:35:45.102930 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nrtm8" event={"ID":"1441eab7-88f7-4278-b61e-15822bf73aca","Type":"ContainerStarted","Data":"af03aae58d590e9b2c2c53302192c36d8a149d77c6599e21740ea866648b62c3"} Jan 28 15:35:45 crc kubenswrapper[4871]: E0128 15:35:45.104337 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nrtm8" podUID="1441eab7-88f7-4278-b61e-15822bf73aca" Jan 28 15:35:45 crc kubenswrapper[4871]: I0128 15:35:45.104956 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-l6xx2" event={"ID":"310a6fc1-965a-4af9-ab12-2c9b2f8046ff","Type":"ContainerStarted","Data":"7a53247babe98058261ce50aa2b8adefe5d93f58b3fe74b6e7d04bc1eb748b64"} Jan 28 15:35:45 crc kubenswrapper[4871]: I0128 15:35:45.105776 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-ngtrg" event={"ID":"74c0f096-51ac-459a-b9f2-a7cb7f462734","Type":"ContainerStarted","Data":"6ba05638786f91008083fb7e1e16d2e62c754da96944e3d0f9525e1574739a4b"} Jan 28 15:35:45 crc kubenswrapper[4871]: E0128 15:35:45.107003 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/heat-operator@sha256:429171b44a24e9e4dde46465d90a272d93b15317ea386184d6ad077cc119d3c9\\\"\"" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-ngtrg" podUID="74c0f096-51ac-459a-b9f2-a7cb7f462734" Jan 28 15:35:45 crc kubenswrapper[4871]: I0128 15:35:45.110694 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-765668569f-4ncq7" event={"ID":"96fd0c1b-c934-4481-b198-38ca2cb9d187","Type":"ContainerStarted","Data":"c00d8310703825ea7f833e3a3206483786a88f68e7d380b8261798719cd36d67"} Jan 28 15:35:45 crc kubenswrapper[4871]: I0128 15:35:45.111967 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-7jmcc" event={"ID":"016e77e5-e2ea-4284-966f-16c5773febce","Type":"ContainerStarted","Data":"9950250960e537cfe51cec50e13317a92847c598fb655a8fee0b0cacd16955f8"} Jan 28 15:35:45 crc kubenswrapper[4871]: E0128 15:35:45.113206 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/manila-operator@sha256:2e1a77365c3b08ff39892565abfc72b72e969f623e58a2663fb93890371fc9da\\\"\"" pod="openstack-operators/manila-operator-controller-manager-765668569f-4ncq7" podUID="96fd0c1b-c934-4481-b198-38ca2cb9d187" Jan 28 15:35:45 crc kubenswrapper[4871]: I0128 15:35:45.114997 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4dvc8" event={"ID":"ad851357-ed69-4ed0-80a4-1de2b1725d37","Type":"ContainerStarted","Data":"be32ee0ceedb17d15b3a8ce07b990fb3896127f988c5594a926e0e0ed3ccb950"} Jan 28 15:35:45 crc kubenswrapper[4871]: I0128 15:35:45.143339 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-958664b5-98l68" event={"ID":"b1b132cc-7a24-4a38-bf0b-6d26b36e551b","Type":"ContainerStarted","Data":"04ca1238f3fbace5aa80be77b01d02236bf8f438b1acc4ca91936e7a0f26154b"} Jan 28 15:35:45 crc kubenswrapper[4871]: I0128 15:35:45.145059 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mvh4p" event={"ID":"41353a5b-bb79-45e7-8135-8229fa386ce4","Type":"ContainerStarted","Data":"6266f101c4f38aa48994d48e4ff6196ce952678a1f1824ce53045825428fe784"} Jan 28 15:35:45 crc kubenswrapper[4871]: I0128 15:35:45.158326 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bg5kq" event={"ID":"3481a933-7882-4030-b852-9eb2f9f89b88","Type":"ContainerStarted","Data":"9d22162b0904ae64ceda0fccb3d6911c33d2701d5229007f8833571ca4569850"} Jan 28 15:35:45 crc kubenswrapper[4871]: E0128 15:35:45.161036 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bg5kq" podUID="3481a933-7882-4030-b852-9eb2f9f89b88" Jan 28 15:35:45 crc kubenswrapper[4871]: I0128 15:35:45.162441 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-svf5f" event={"ID":"806e885f-b6fc-4e7c-a81a-31bff54b7b06","Type":"ContainerStarted","Data":"fbfa956eeea6be40ee0c9dd50991cb1d428d5a1957ce03ede0595a06c2ba5874"} Jan 28 15:35:45 crc kubenswrapper[4871]: I0128 15:35:45.165904 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-qvbqw" event={"ID":"936d0985-4e97-40ed-b0c4-e0eb92d4372f","Type":"ContainerStarted","Data":"88c8b28a3b5ef7a480632b873b50d06bdb3fc9454ce5a95f61dec6316b4d6d74"} Jan 28 15:35:45 crc kubenswrapper[4871]: I0128 15:35:45.168830 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-rqmz6" event={"ID":"22008ca1-ed64-4a04-b45a-c9808ad68773","Type":"ContainerStarted","Data":"74b0d584d73e7e1b0ba97bcfa90185eecfc299a35b1bb450bdced82732afc729"} Jan 28 15:35:45 crc kubenswrapper[4871]: E0128 15:35:45.168957 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:c9d639f3d01f7a4f139a8b7fb751ca880893f7b9a4e596d6a5304534e46392ba\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-qvbqw" podUID="936d0985-4e97-40ed-b0c4-e0eb92d4372f" Jan 28 15:35:45 crc kubenswrapper[4871]: E0128 15:35:45.172739 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4\\\"\"" 
pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-rqmz6" podUID="22008ca1-ed64-4a04-b45a-c9808ad68773" Jan 28 15:35:46 crc kubenswrapper[4871]: E0128 15:35:46.187035 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d62vq" podUID="8552dce5-130b-4598-8101-89ea1c19dc3a" Jan 28 15:35:46 crc kubenswrapper[4871]: E0128 15:35:46.187426 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-rqmz6" podUID="22008ca1-ed64-4a04-b45a-c9808ad68773" Jan 28 15:35:46 crc kubenswrapper[4871]: E0128 15:35:46.187543 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/heat-operator@sha256:429171b44a24e9e4dde46465d90a272d93b15317ea386184d6ad077cc119d3c9\\\"\"" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-ngtrg" podUID="74c0f096-51ac-459a-b9f2-a7cb7f462734" Jan 28 15:35:46 crc kubenswrapper[4871]: E0128 15:35:46.187596 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nrtm8" podUID="1441eab7-88f7-4278-b61e-15822bf73aca" Jan 28 15:35:46 crc 
kubenswrapper[4871]: E0128 15:35:46.187666 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bg5kq" podUID="3481a933-7882-4030-b852-9eb2f9f89b88" Jan 28 15:35:46 crc kubenswrapper[4871]: E0128 15:35:46.188275 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:2e1a77365c3b08ff39892565abfc72b72e969f623e58a2663fb93890371fc9da\\\"\"" pod="openstack-operators/manila-operator-controller-manager-765668569f-4ncq7" podUID="96fd0c1b-c934-4481-b198-38ca2cb9d187" Jan 28 15:35:46 crc kubenswrapper[4871]: E0128 15:35:46.189175 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-kh8pk" podUID="a7b94f34-87cf-4992-9480-4019281227c4" Jan 28 15:35:46 crc kubenswrapper[4871]: E0128 15:35:46.191539 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:c9d639f3d01f7a4f139a8b7fb751ca880893f7b9a4e596d6a5304534e46392ba\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-qvbqw" podUID="936d0985-4e97-40ed-b0c4-e0eb92d4372f" Jan 28 15:35:46 crc kubenswrapper[4871]: I0128 15:35:46.267069 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-cert\") pod \"infra-operator-controller-manager-79955696d6-vnt6l\" (UID: \"91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l" Jan 28 15:35:46 crc kubenswrapper[4871]: E0128 15:35:46.267193 4871 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:35:46 crc kubenswrapper[4871]: E0128 15:35:46.267235 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-cert podName:91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0 nodeName:}" failed. No retries permitted until 2026-01-28 15:35:50.26722204 +0000 UTC m=+1102.163060362 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-cert") pod "infra-operator-controller-manager-79955696d6-vnt6l" (UID: "91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:35:46 crc kubenswrapper[4871]: I0128 15:35:46.469915 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a19393cd-d011-4387-9a34-07b67bd30d4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g\" (UID: \"a19393cd-d011-4387-9a34-07b67bd30d4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g" Jan 28 15:35:46 crc kubenswrapper[4871]: E0128 15:35:46.470103 4871 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:35:46 crc kubenswrapper[4871]: E0128 15:35:46.470202 4871 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a19393cd-d011-4387-9a34-07b67bd30d4e-cert podName:a19393cd-d011-4387-9a34-07b67bd30d4e nodeName:}" failed. No retries permitted until 2026-01-28 15:35:50.470179675 +0000 UTC m=+1102.366018087 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a19393cd-d011-4387-9a34-07b67bd30d4e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g" (UID: "a19393cd-d011-4387-9a34-07b67bd30d4e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:35:46 crc kubenswrapper[4871]: I0128 15:35:46.773710 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: \"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j" Jan 28 15:35:46 crc kubenswrapper[4871]: I0128 15:35:46.773810 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: \"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j" Jan 28 15:35:46 crc kubenswrapper[4871]: E0128 15:35:46.773899 4871 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:35:46 crc kubenswrapper[4871]: E0128 15:35:46.773968 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs podName:a1f1fd07-5c03-420c-bb27-e5ec2fece55b nodeName:}" failed. 
No retries permitted until 2026-01-28 15:35:50.77395119 +0000 UTC m=+1102.669789512 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs") pod "openstack-operator-controller-manager-57d89bf95c-d4p8j" (UID: "a1f1fd07-5c03-420c-bb27-e5ec2fece55b") : secret "webhook-server-cert" not found Jan 28 15:35:46 crc kubenswrapper[4871]: E0128 15:35:46.774009 4871 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:35:46 crc kubenswrapper[4871]: E0128 15:35:46.774091 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-metrics-certs podName:a1f1fd07-5c03-420c-bb27-e5ec2fece55b nodeName:}" failed. No retries permitted until 2026-01-28 15:35:50.774072464 +0000 UTC m=+1102.669910786 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-metrics-certs") pod "openstack-operator-controller-manager-57d89bf95c-d4p8j" (UID: "a1f1fd07-5c03-420c-bb27-e5ec2fece55b") : secret "metrics-server-cert" not found Jan 28 15:35:50 crc kubenswrapper[4871]: I0128 15:35:50.337450 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-cert\") pod \"infra-operator-controller-manager-79955696d6-vnt6l\" (UID: \"91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l" Jan 28 15:35:50 crc kubenswrapper[4871]: E0128 15:35:50.337667 4871 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:35:50 crc kubenswrapper[4871]: E0128 15:35:50.337715 4871 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-cert podName:91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0 nodeName:}" failed. No retries permitted until 2026-01-28 15:35:58.337700098 +0000 UTC m=+1110.233538420 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-cert") pod "infra-operator-controller-manager-79955696d6-vnt6l" (UID: "91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:35:50 crc kubenswrapper[4871]: I0128 15:35:50.540719 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a19393cd-d011-4387-9a34-07b67bd30d4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g\" (UID: \"a19393cd-d011-4387-9a34-07b67bd30d4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g" Jan 28 15:35:50 crc kubenswrapper[4871]: E0128 15:35:50.540857 4871 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:35:50 crc kubenswrapper[4871]: E0128 15:35:50.541127 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a19393cd-d011-4387-9a34-07b67bd30d4e-cert podName:a19393cd-d011-4387-9a34-07b67bd30d4e nodeName:}" failed. No retries permitted until 2026-01-28 15:35:58.541113078 +0000 UTC m=+1110.436951390 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a19393cd-d011-4387-9a34-07b67bd30d4e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g" (UID: "a19393cd-d011-4387-9a34-07b67bd30d4e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:35:50 crc kubenswrapper[4871]: I0128 15:35:50.844701 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: \"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j" Jan 28 15:35:50 crc kubenswrapper[4871]: I0128 15:35:50.844780 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: \"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j" Jan 28 15:35:50 crc kubenswrapper[4871]: E0128 15:35:50.844937 4871 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:35:50 crc kubenswrapper[4871]: E0128 15:35:50.844983 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-metrics-certs podName:a1f1fd07-5c03-420c-bb27-e5ec2fece55b nodeName:}" failed. No retries permitted until 2026-01-28 15:35:58.844969326 +0000 UTC m=+1110.740807648 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-metrics-certs") pod "openstack-operator-controller-manager-57d89bf95c-d4p8j" (UID: "a1f1fd07-5c03-420c-bb27-e5ec2fece55b") : secret "metrics-server-cert" not found Jan 28 15:35:50 crc kubenswrapper[4871]: E0128 15:35:50.845293 4871 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:35:50 crc kubenswrapper[4871]: E0128 15:35:50.845329 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs podName:a1f1fd07-5c03-420c-bb27-e5ec2fece55b nodeName:}" failed. No retries permitted until 2026-01-28 15:35:58.845321667 +0000 UTC m=+1110.741159989 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs") pod "openstack-operator-controller-manager-57d89bf95c-d4p8j" (UID: "a1f1fd07-5c03-420c-bb27-e5ec2fece55b") : secret "webhook-server-cert" not found Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.266141 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-dz587" event={"ID":"31e7b45a-da4b-4920-895a-d51dba36168e","Type":"ContainerStarted","Data":"b485c54ee0f3e07f0933e2eaa64f098297bf1e8de2a627d33a7081cc70d04b89"} Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.266623 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-dz587" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.267973 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-tb9cx" 
event={"ID":"4f068c70-d72a-4582-96e6-891b7269b1ba","Type":"ContainerStarted","Data":"250597c4a1e5c4cd5d96714791f394448836feee26f7d806045f39285665fc17"} Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.268102 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-tb9cx" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.269075 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-svf5f" event={"ID":"806e885f-b6fc-4e7c-a81a-31bff54b7b06","Type":"ContainerStarted","Data":"ab3fd7b764d0d02fee489edd0d112ed177fe5f54ded959cb5ac95efe54822e7c"} Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.269190 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-svf5f" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.270144 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lrwg2" event={"ID":"2fa4f067-8eed-44b7-995a-5160ee0576c6","Type":"ContainerStarted","Data":"af5bad8049927a90ac576074037d4b1fab89e4952394ab864cb84129bb07c8b1"} Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.270473 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lrwg2" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.271791 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-7jmcc" event={"ID":"016e77e5-e2ea-4284-966f-16c5773febce","Type":"ContainerStarted","Data":"0d480b5c3adce59ea51da8e6ace6aa7f2cf40232316b09ba155365a2f7354199"} Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.271906 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-7jmcc" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.272774 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4dvc8" event={"ID":"ad851357-ed69-4ed0-80a4-1de2b1725d37","Type":"ContainerStarted","Data":"d55824a95b8bbc1fef8c18098bb4beb34f53fd5067352d1accc7eb3a01f22e0c"} Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.273207 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4dvc8" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.274181 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-958664b5-98l68" event={"ID":"b1b132cc-7a24-4a38-bf0b-6d26b36e551b","Type":"ContainerStarted","Data":"017f09c5c2bdcf31a072dd8a88116f969b26e683eccef28e0048418ad819f175"} Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.274513 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-958664b5-98l68" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.275550 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hkk6l" event={"ID":"270211d8-fb57-4cb0-ba0b-9de5ae660e2e","Type":"ContainerStarted","Data":"fc69599746d343d68c63314863af4aced291e347f2697cbe87df1cb719a36619"} Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.275912 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hkk6l" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.277382 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-vjjlf" 
event={"ID":"95211b62-9193-4fe4-b851-fe46793fac5b","Type":"ContainerStarted","Data":"dceeb68b89dd702aa197d110966d92e36bb30faa19d295e499502be8866484d0"} Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.277760 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-vjjlf" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.279765 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mvh4p" event={"ID":"41353a5b-bb79-45e7-8135-8229fa386ce4","Type":"ContainerStarted","Data":"bf893a698d379fbf7ef3736ef4f3012b69865419e6feb8ca2ebbfd9245d913f6"} Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.280161 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mvh4p" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.281110 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-cvkg8" event={"ID":"866e339c-74a0-47e8-aff9-3463890568a9","Type":"ContainerStarted","Data":"6b83ec6d04d9457df91f2294540b9736a505e5e728c98854e6a4b4a0e5b1b073"} Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.281465 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-cvkg8" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.282451 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-l6xx2" event={"ID":"310a6fc1-965a-4af9-ab12-2c9b2f8046ff","Type":"ContainerStarted","Data":"fa11384a41bd9fcb0084f29bb949a70be438fe7c5f673ed0b09d74ec6ad48887"} Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.282809 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-l6xx2" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.304948 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-dz587" podStartSLOduration=3.098769078 podStartE2EDuration="14.304931801s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:44.209258389 +0000 UTC m=+1096.105096711" lastFinishedPulling="2026-01-28 15:35:55.415421112 +0000 UTC m=+1107.311259434" observedRunningTime="2026-01-28 15:35:56.301057549 +0000 UTC m=+1108.196895871" watchObservedRunningTime="2026-01-28 15:35:56.304931801 +0000 UTC m=+1108.200770123" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.339004 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lrwg2" podStartSLOduration=3.148654483 podStartE2EDuration="14.338987966s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:44.225168962 +0000 UTC m=+1096.121007284" lastFinishedPulling="2026-01-28 15:35:55.415502445 +0000 UTC m=+1107.311340767" observedRunningTime="2026-01-28 15:35:56.334638108 +0000 UTC m=+1108.230476430" watchObservedRunningTime="2026-01-28 15:35:56.338987966 +0000 UTC m=+1108.234826288" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.385715 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-cvkg8" podStartSLOduration=3.12131025 podStartE2EDuration="14.38569415s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:44.224232072 +0000 UTC m=+1096.120070394" lastFinishedPulling="2026-01-28 15:35:55.488615972 +0000 UTC m=+1107.384454294" observedRunningTime="2026-01-28 15:35:56.385336418 +0000 UTC m=+1108.281174740" 
watchObservedRunningTime="2026-01-28 15:35:56.38569415 +0000 UTC m=+1108.281532472" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.425107 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-7jmcc" podStartSLOduration=3.136946684 podStartE2EDuration="14.425090193s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:44.227766524 +0000 UTC m=+1096.123604846" lastFinishedPulling="2026-01-28 15:35:55.515910033 +0000 UTC m=+1107.411748355" observedRunningTime="2026-01-28 15:35:56.422240103 +0000 UTC m=+1108.318078425" watchObservedRunningTime="2026-01-28 15:35:56.425090193 +0000 UTC m=+1108.320928505" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.494492 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-l6xx2" podStartSLOduration=3.254967407 podStartE2EDuration="14.494478382s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:44.175943088 +0000 UTC m=+1096.071781410" lastFinishedPulling="2026-01-28 15:35:55.415454063 +0000 UTC m=+1107.311292385" observedRunningTime="2026-01-28 15:35:56.454318226 +0000 UTC m=+1108.350156548" watchObservedRunningTime="2026-01-28 15:35:56.494478382 +0000 UTC m=+1108.390316704" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.541025 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mvh4p" podStartSLOduration=3.264148088 podStartE2EDuration="14.541003371s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:44.139498448 +0000 UTC m=+1096.035336770" lastFinishedPulling="2026-01-28 15:35:55.416353731 +0000 UTC m=+1107.312192053" observedRunningTime="2026-01-28 15:35:56.498966635 +0000 UTC m=+1108.394804957" 
watchObservedRunningTime="2026-01-28 15:35:56.541003371 +0000 UTC m=+1108.436841693" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.541940 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hkk6l" podStartSLOduration=2.761515466 podStartE2EDuration="14.54193358s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:43.698471121 +0000 UTC m=+1095.594309443" lastFinishedPulling="2026-01-28 15:35:55.478889225 +0000 UTC m=+1107.374727557" observedRunningTime="2026-01-28 15:35:56.540504845 +0000 UTC m=+1108.436343167" watchObservedRunningTime="2026-01-28 15:35:56.54193358 +0000 UTC m=+1108.437771902" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.580139 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-svf5f" podStartSLOduration=3.242031281 podStartE2EDuration="14.580120776s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:44.178741176 +0000 UTC m=+1096.074579498" lastFinishedPulling="2026-01-28 15:35:55.516830661 +0000 UTC m=+1107.412668993" observedRunningTime="2026-01-28 15:35:56.577131931 +0000 UTC m=+1108.472970253" watchObservedRunningTime="2026-01-28 15:35:56.580120776 +0000 UTC m=+1108.475959098" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.607493 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4dvc8" podStartSLOduration=3.357599957 podStartE2EDuration="14.607476909s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:44.204893212 +0000 UTC m=+1096.100731534" lastFinishedPulling="2026-01-28 15:35:55.454770164 +0000 UTC m=+1107.350608486" observedRunningTime="2026-01-28 15:35:56.605528837 +0000 UTC m=+1108.501367159" 
watchObservedRunningTime="2026-01-28 15:35:56.607476909 +0000 UTC m=+1108.503315231" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.629736 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-tb9cx" podStartSLOduration=2.967958701 podStartE2EDuration="14.62972026s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:43.753727475 +0000 UTC m=+1095.649565797" lastFinishedPulling="2026-01-28 15:35:55.415489034 +0000 UTC m=+1107.311327356" observedRunningTime="2026-01-28 15:35:56.627096338 +0000 UTC m=+1108.522934670" watchObservedRunningTime="2026-01-28 15:35:56.62972026 +0000 UTC m=+1108.525558582" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.654988 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-vjjlf" podStartSLOduration=2.841985047 podStartE2EDuration="14.654968477s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:43.677274893 +0000 UTC m=+1095.573113215" lastFinishedPulling="2026-01-28 15:35:55.490258323 +0000 UTC m=+1107.386096645" observedRunningTime="2026-01-28 15:35:56.650785995 +0000 UTC m=+1108.546624317" watchObservedRunningTime="2026-01-28 15:35:56.654968477 +0000 UTC m=+1108.550806799" Jan 28 15:35:56 crc kubenswrapper[4871]: I0128 15:35:56.682252 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-958664b5-98l68" podStartSLOduration=3.447886115 podStartE2EDuration="14.682232477s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:44.181158153 +0000 UTC m=+1096.076996485" lastFinishedPulling="2026-01-28 15:35:55.415504525 +0000 UTC m=+1107.311342847" observedRunningTime="2026-01-28 15:35:56.678872831 +0000 UTC m=+1108.574711153" 
watchObservedRunningTime="2026-01-28 15:35:56.682232477 +0000 UTC m=+1108.578070799" Jan 28 15:35:58 crc kubenswrapper[4871]: I0128 15:35:58.386431 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-cert\") pod \"infra-operator-controller-manager-79955696d6-vnt6l\" (UID: \"91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l" Jan 28 15:35:58 crc kubenswrapper[4871]: I0128 15:35:58.394559 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0-cert\") pod \"infra-operator-controller-manager-79955696d6-vnt6l\" (UID: \"91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l" Jan 28 15:35:58 crc kubenswrapper[4871]: I0128 15:35:58.589245 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a19393cd-d011-4387-9a34-07b67bd30d4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g\" (UID: \"a19393cd-d011-4387-9a34-07b67bd30d4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g" Jan 28 15:35:58 crc kubenswrapper[4871]: I0128 15:35:58.597350 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a19393cd-d011-4387-9a34-07b67bd30d4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g\" (UID: \"a19393cd-d011-4387-9a34-07b67bd30d4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g" Jan 28 15:35:58 crc kubenswrapper[4871]: I0128 15:35:58.670833 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l" Jan 28 15:35:58 crc kubenswrapper[4871]: I0128 15:35:58.742059 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g" Jan 28 15:35:58 crc kubenswrapper[4871]: I0128 15:35:58.892873 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: \"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j" Jan 28 15:35:58 crc kubenswrapper[4871]: I0128 15:35:58.892977 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: \"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j" Jan 28 15:35:58 crc kubenswrapper[4871]: E0128 15:35:58.893077 4871 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:35:58 crc kubenswrapper[4871]: E0128 15:35:58.893139 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs podName:a1f1fd07-5c03-420c-bb27-e5ec2fece55b nodeName:}" failed. No retries permitted until 2026-01-28 15:36:14.893122214 +0000 UTC m=+1126.788960536 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs") pod "openstack-operator-controller-manager-57d89bf95c-d4p8j" (UID: "a1f1fd07-5c03-420c-bb27-e5ec2fece55b") : secret "webhook-server-cert" not found Jan 28 15:35:58 crc kubenswrapper[4871]: I0128 15:35:58.896865 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: \"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j" Jan 28 15:35:59 crc kubenswrapper[4871]: I0128 15:35:59.769670 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l"] Jan 28 15:35:59 crc kubenswrapper[4871]: W0128 15:35:59.777885 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91e83f0f_a088_4d2f_a32b_b7aaf38fd6f0.slice/crio-507a380d2c116051e7b2deae1ea7ece812349e6bc1bb87bbf63a5d2c14c0f810 WatchSource:0}: Error finding container 507a380d2c116051e7b2deae1ea7ece812349e6bc1bb87bbf63a5d2c14c0f810: Status 404 returned error can't find the container with id 507a380d2c116051e7b2deae1ea7ece812349e6bc1bb87bbf63a5d2c14c0f810 Jan 28 15:35:59 crc kubenswrapper[4871]: I0128 15:35:59.916052 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g"] Jan 28 15:36:00 crc kubenswrapper[4871]: I0128 15:36:00.309341 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g" 
event={"ID":"a19393cd-d011-4387-9a34-07b67bd30d4e","Type":"ContainerStarted","Data":"5adfd80762629604dffcd7d0472d75d0d3de244fc2b73254023aef08c678aa3f"} Jan 28 15:36:00 crc kubenswrapper[4871]: I0128 15:36:00.311176 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l" event={"ID":"91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0","Type":"ContainerStarted","Data":"507a380d2c116051e7b2deae1ea7ece812349e6bc1bb87bbf63a5d2c14c0f810"} Jan 28 15:36:00 crc kubenswrapper[4871]: I0128 15:36:00.312632 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-qvbqw" event={"ID":"936d0985-4e97-40ed-b0c4-e0eb92d4372f","Type":"ContainerStarted","Data":"a27c08a73fa965eac8d22708adaef873d11697f69032284f1b1ba5d5d8176b49"} Jan 28 15:36:00 crc kubenswrapper[4871]: I0128 15:36:00.312856 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-qvbqw" Jan 28 15:36:00 crc kubenswrapper[4871]: I0128 15:36:00.332459 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-qvbqw" podStartSLOduration=3.138558574 podStartE2EDuration="18.332433553s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:44.260876168 +0000 UTC m=+1096.156714490" lastFinishedPulling="2026-01-28 15:35:59.454751147 +0000 UTC m=+1111.350589469" observedRunningTime="2026-01-28 15:36:00.331743222 +0000 UTC m=+1112.227581544" watchObservedRunningTime="2026-01-28 15:36:00.332433553 +0000 UTC m=+1112.228271905" Jan 28 15:36:02 crc kubenswrapper[4871]: I0128 15:36:02.619936 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hkk6l" Jan 28 15:36:02 crc kubenswrapper[4871]: I0128 
15:36:02.645117 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-tb9cx" Jan 28 15:36:02 crc kubenswrapper[4871]: I0128 15:36:02.679240 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-vjjlf" Jan 28 15:36:02 crc kubenswrapper[4871]: I0128 15:36:02.763530 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mvh4p" Jan 28 15:36:02 crc kubenswrapper[4871]: I0128 15:36:02.764954 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-l6xx2" Jan 28 15:36:02 crc kubenswrapper[4871]: I0128 15:36:02.813113 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-958664b5-98l68" Jan 28 15:36:02 crc kubenswrapper[4871]: I0128 15:36:02.841441 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-7jmcc" Jan 28 15:36:02 crc kubenswrapper[4871]: I0128 15:36:02.934932 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-cvkg8" Jan 28 15:36:02 crc kubenswrapper[4871]: I0128 15:36:02.971648 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-svf5f" Jan 28 15:36:03 crc kubenswrapper[4871]: I0128 15:36:03.005708 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-dz587" Jan 28 15:36:03 crc kubenswrapper[4871]: I0128 15:36:03.111049 4871 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4dvc8" Jan 28 15:36:03 crc kubenswrapper[4871]: I0128 15:36:03.123059 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lrwg2" Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.371469 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l" event={"ID":"91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0","Type":"ContainerStarted","Data":"95eca7f557f02d2212a87a1cfa389dfae100abc44af3b7eef70bff4bc2cbf9c0"} Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.372778 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l" Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.374052 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-kh8pk" event={"ID":"a7b94f34-87cf-4992-9480-4019281227c4","Type":"ContainerStarted","Data":"3211a3f84bc423e470a2a65b43bb74280545ef760c4d17874214c7274117f8e9"} Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.374302 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-kh8pk" Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.375443 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-765668569f-4ncq7" event={"ID":"96fd0c1b-c934-4481-b198-38ca2cb9d187","Type":"ContainerStarted","Data":"ac0a16e6150d97c612521d3ff42b06712378b0fb8dc9c049b14001845e8b30ee"} Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.375629 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-765668569f-4ncq7" 
Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.376751 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g" event={"ID":"a19393cd-d011-4387-9a34-07b67bd30d4e","Type":"ContainerStarted","Data":"cae27186fefb49325bd1761ad618553c1212690a66d3ef0c76e4602ee5cf3d8a"} Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.376887 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g" Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.378111 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d62vq" event={"ID":"8552dce5-130b-4598-8101-89ea1c19dc3a","Type":"ContainerStarted","Data":"ff1928eea18eeca30a561f1de547c10cb608ac96a3ab0ecc9c33c4cce27e07d1"} Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.378804 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d62vq" Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.380304 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-rqmz6" event={"ID":"22008ca1-ed64-4a04-b45a-c9808ad68773","Type":"ContainerStarted","Data":"c0d9f8493d9570039ac42182de6c084858fc5c92b8f3e9fe3a9bda47344558a7"} Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.380552 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-rqmz6" Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.381646 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-ngtrg" 
event={"ID":"74c0f096-51ac-459a-b9f2-a7cb7f462734","Type":"ContainerStarted","Data":"73744210eafe2fcb21b5fec84f0cb72efb7b18cf998630cbb86d644b93be5ea5"} Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.381845 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-ngtrg" Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.383557 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nrtm8" event={"ID":"1441eab7-88f7-4278-b61e-15822bf73aca","Type":"ContainerStarted","Data":"27bbb76a1adf348d26a58780ead9819881e4577a1f5e0b384b1c9f38328e8688"} Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.383842 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nrtm8" Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.384857 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bg5kq" event={"ID":"3481a933-7882-4030-b852-9eb2f9f89b88","Type":"ContainerStarted","Data":"285664f20a0b2692aac5211faaa5f521569023137070018d6d44707de8fd7943"} Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.391527 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l" podStartSLOduration=19.089151641 podStartE2EDuration="27.391505462s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:59.780443454 +0000 UTC m=+1111.676281776" lastFinishedPulling="2026-01-28 15:36:08.082797265 +0000 UTC m=+1119.978635597" observedRunningTime="2026-01-28 15:36:09.388523439 +0000 UTC m=+1121.284361771" watchObservedRunningTime="2026-01-28 15:36:09.391505462 +0000 UTC m=+1121.287343784" Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.407582 4871 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d62vq" podStartSLOduration=4.014207666 podStartE2EDuration="27.407559559s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:44.257553233 +0000 UTC m=+1096.153391555" lastFinishedPulling="2026-01-28 15:36:07.650905116 +0000 UTC m=+1119.546743448" observedRunningTime="2026-01-28 15:36:09.405874406 +0000 UTC m=+1121.301712748" watchObservedRunningTime="2026-01-28 15:36:09.407559559 +0000 UTC m=+1121.303397881" Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.432668 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-kh8pk" podStartSLOduration=3.558441704 podStartE2EDuration="27.432636621s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:44.248856809 +0000 UTC m=+1096.144695131" lastFinishedPulling="2026-01-28 15:36:08.123051726 +0000 UTC m=+1120.018890048" observedRunningTime="2026-01-28 15:36:09.420243659 +0000 UTC m=+1121.316081981" watchObservedRunningTime="2026-01-28 15:36:09.432636621 +0000 UTC m=+1121.328474943" Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.448363 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bg5kq" podStartSLOduration=2.534280392 podStartE2EDuration="26.448345456s" podCreationTimestamp="2026-01-28 15:35:43 +0000 UTC" firstStartedPulling="2026-01-28 15:35:44.261210209 +0000 UTC m=+1096.157048531" lastFinishedPulling="2026-01-28 15:36:08.175275273 +0000 UTC m=+1120.071113595" observedRunningTime="2026-01-28 15:36:09.439471816 +0000 UTC m=+1121.335310138" watchObservedRunningTime="2026-01-28 15:36:09.448345456 +0000 UTC m=+1121.344183778" Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.495044 4871 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g" podStartSLOduration=19.772990562 podStartE2EDuration="27.495020929s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:59.929672434 +0000 UTC m=+1111.825510756" lastFinishedPulling="2026-01-28 15:36:07.651702811 +0000 UTC m=+1119.547541123" observedRunningTime="2026-01-28 15:36:09.485784638 +0000 UTC m=+1121.381622980" watchObservedRunningTime="2026-01-28 15:36:09.495020929 +0000 UTC m=+1121.390859251" Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.501216 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-ngtrg" podStartSLOduration=4.099711265 podStartE2EDuration="27.501198264s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:44.249422117 +0000 UTC m=+1096.145260439" lastFinishedPulling="2026-01-28 15:36:07.650909106 +0000 UTC m=+1119.546747438" observedRunningTime="2026-01-28 15:36:09.500073069 +0000 UTC m=+1121.395911411" watchObservedRunningTime="2026-01-28 15:36:09.501198264 +0000 UTC m=+1121.397036586" Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.515234 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-rqmz6" podStartSLOduration=4.124140095 podStartE2EDuration="27.515217246s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:44.260670992 +0000 UTC m=+1096.156509314" lastFinishedPulling="2026-01-28 15:36:07.651748133 +0000 UTC m=+1119.547586465" observedRunningTime="2026-01-28 15:36:09.512391027 +0000 UTC m=+1121.408229339" watchObservedRunningTime="2026-01-28 15:36:09.515217246 +0000 UTC m=+1121.411055568" Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.547215 4871 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nrtm8" podStartSLOduration=3.675942101 podStartE2EDuration="27.547189815s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:44.251219363 +0000 UTC m=+1096.147057685" lastFinishedPulling="2026-01-28 15:36:08.122467057 +0000 UTC m=+1120.018305399" observedRunningTime="2026-01-28 15:36:09.539836493 +0000 UTC m=+1121.435674825" watchObservedRunningTime="2026-01-28 15:36:09.547189815 +0000 UTC m=+1121.443028137" Jan 28 15:36:09 crc kubenswrapper[4871]: I0128 15:36:09.564153 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-765668569f-4ncq7" podStartSLOduration=4.170837819 podStartE2EDuration="27.56413008s" podCreationTimestamp="2026-01-28 15:35:42 +0000 UTC" firstStartedPulling="2026-01-28 15:35:44.257619685 +0000 UTC m=+1096.153458007" lastFinishedPulling="2026-01-28 15:36:07.650911906 +0000 UTC m=+1119.546750268" observedRunningTime="2026-01-28 15:36:09.559618507 +0000 UTC m=+1121.455456829" watchObservedRunningTime="2026-01-28 15:36:09.56413008 +0000 UTC m=+1121.459968402" Jan 28 15:36:13 crc kubenswrapper[4871]: I0128 15:36:13.264314 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nrtm8" Jan 28 15:36:13 crc kubenswrapper[4871]: I0128 15:36:13.313245 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-qvbqw" Jan 28 15:36:13 crc kubenswrapper[4871]: I0128 15:36:13.324317 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d62vq" Jan 28 15:36:13 crc kubenswrapper[4871]: I0128 15:36:13.355977 4871 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-rqmz6" Jan 28 15:36:14 crc kubenswrapper[4871]: I0128 15:36:14.943027 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: \"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j" Jan 28 15:36:14 crc kubenswrapper[4871]: I0128 15:36:14.949315 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1f1fd07-5c03-420c-bb27-e5ec2fece55b-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-d4p8j\" (UID: \"a1f1fd07-5c03-420c-bb27-e5ec2fece55b\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j" Jan 28 15:36:15 crc kubenswrapper[4871]: I0128 15:36:15.173647 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j" Jan 28 15:36:15 crc kubenswrapper[4871]: I0128 15:36:15.591563 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j"] Jan 28 15:36:16 crc kubenswrapper[4871]: I0128 15:36:16.431899 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j" event={"ID":"a1f1fd07-5c03-420c-bb27-e5ec2fece55b","Type":"ContainerStarted","Data":"55f089694c1b8bf17e8a3a88ccef9ec541302d740c0b4e814bea9c845d6dd048"} Jan 28 15:36:18 crc kubenswrapper[4871]: I0128 15:36:18.678845 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vnt6l" Jan 28 15:36:18 crc kubenswrapper[4871]: I0128 15:36:18.762642 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g" Jan 28 15:36:22 crc kubenswrapper[4871]: I0128 15:36:22.480665 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j" event={"ID":"a1f1fd07-5c03-420c-bb27-e5ec2fece55b","Type":"ContainerStarted","Data":"e2dca93ac54d007d696b25d5b9b20fc18af8f383e1eacbc2d67a12923d93a9c3"} Jan 28 15:36:22 crc kubenswrapper[4871]: I0128 15:36:22.481060 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j" Jan 28 15:36:22 crc kubenswrapper[4871]: I0128 15:36:22.511906 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j" podStartSLOduration=40.511838276 podStartE2EDuration="40.511838276s" podCreationTimestamp="2026-01-28 15:35:42 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:36:22.502450499 +0000 UTC m=+1134.398288821" watchObservedRunningTime="2026-01-28 15:36:22.511838276 +0000 UTC m=+1134.407676638" Jan 28 15:36:22 crc kubenswrapper[4871]: I0128 15:36:22.709367 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-ngtrg" Jan 28 15:36:22 crc kubenswrapper[4871]: I0128 15:36:22.914599 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-765668569f-4ncq7" Jan 28 15:36:22 crc kubenswrapper[4871]: I0128 15:36:22.936142 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-kh8pk" Jan 28 15:36:35 crc kubenswrapper[4871]: I0128 15:36:35.183712 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-d4p8j" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.434408 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tq5mb"] Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.437507 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tq5mb" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.443108 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.443254 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-7mxnx" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.447129 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.447193 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.451322 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tq5mb"] Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.471991 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-lps7c"] Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.473226 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-lps7c" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.477545 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.493857 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-lps7c"] Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.539167 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4wqk\" (UniqueName: \"kubernetes.io/projected/03215699-61b8-4b69-9fe4-fbee746d5f61-kube-api-access-v4wqk\") pod \"dnsmasq-dns-675f4bcbfc-tq5mb\" (UID: \"03215699-61b8-4b69-9fe4-fbee746d5f61\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tq5mb" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.539218 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86d7b0fe-3e9f-433d-907a-1d9714dd4d1a-config\") pod \"dnsmasq-dns-78dd6ddcc-lps7c\" (UID: \"86d7b0fe-3e9f-433d-907a-1d9714dd4d1a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lps7c" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.539272 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sx8r\" (UniqueName: \"kubernetes.io/projected/86d7b0fe-3e9f-433d-907a-1d9714dd4d1a-kube-api-access-5sx8r\") pod \"dnsmasq-dns-78dd6ddcc-lps7c\" (UID: \"86d7b0fe-3e9f-433d-907a-1d9714dd4d1a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lps7c" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.539290 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03215699-61b8-4b69-9fe4-fbee746d5f61-config\") pod \"dnsmasq-dns-675f4bcbfc-tq5mb\" (UID: \"03215699-61b8-4b69-9fe4-fbee746d5f61\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-tq5mb" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.539332 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86d7b0fe-3e9f-433d-907a-1d9714dd4d1a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-lps7c\" (UID: \"86d7b0fe-3e9f-433d-907a-1d9714dd4d1a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lps7c" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.640765 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sx8r\" (UniqueName: \"kubernetes.io/projected/86d7b0fe-3e9f-433d-907a-1d9714dd4d1a-kube-api-access-5sx8r\") pod \"dnsmasq-dns-78dd6ddcc-lps7c\" (UID: \"86d7b0fe-3e9f-433d-907a-1d9714dd4d1a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lps7c" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.640814 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03215699-61b8-4b69-9fe4-fbee746d5f61-config\") pod \"dnsmasq-dns-675f4bcbfc-tq5mb\" (UID: \"03215699-61b8-4b69-9fe4-fbee746d5f61\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tq5mb" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.640864 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86d7b0fe-3e9f-433d-907a-1d9714dd4d1a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-lps7c\" (UID: \"86d7b0fe-3e9f-433d-907a-1d9714dd4d1a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lps7c" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.640897 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4wqk\" (UniqueName: \"kubernetes.io/projected/03215699-61b8-4b69-9fe4-fbee746d5f61-kube-api-access-v4wqk\") pod \"dnsmasq-dns-675f4bcbfc-tq5mb\" (UID: \"03215699-61b8-4b69-9fe4-fbee746d5f61\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tq5mb" Jan 28 
15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.640926 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86d7b0fe-3e9f-433d-907a-1d9714dd4d1a-config\") pod \"dnsmasq-dns-78dd6ddcc-lps7c\" (UID: \"86d7b0fe-3e9f-433d-907a-1d9714dd4d1a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lps7c" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.641704 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86d7b0fe-3e9f-433d-907a-1d9714dd4d1a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-lps7c\" (UID: \"86d7b0fe-3e9f-433d-907a-1d9714dd4d1a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lps7c" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.642072 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86d7b0fe-3e9f-433d-907a-1d9714dd4d1a-config\") pod \"dnsmasq-dns-78dd6ddcc-lps7c\" (UID: \"86d7b0fe-3e9f-433d-907a-1d9714dd4d1a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lps7c" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.642969 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03215699-61b8-4b69-9fe4-fbee746d5f61-config\") pod \"dnsmasq-dns-675f4bcbfc-tq5mb\" (UID: \"03215699-61b8-4b69-9fe4-fbee746d5f61\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tq5mb" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.657802 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4wqk\" (UniqueName: \"kubernetes.io/projected/03215699-61b8-4b69-9fe4-fbee746d5f61-kube-api-access-v4wqk\") pod \"dnsmasq-dns-675f4bcbfc-tq5mb\" (UID: \"03215699-61b8-4b69-9fe4-fbee746d5f61\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tq5mb" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.658138 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5sx8r\" (UniqueName: \"kubernetes.io/projected/86d7b0fe-3e9f-433d-907a-1d9714dd4d1a-kube-api-access-5sx8r\") pod \"dnsmasq-dns-78dd6ddcc-lps7c\" (UID: \"86d7b0fe-3e9f-433d-907a-1d9714dd4d1a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lps7c" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.755784 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tq5mb" Jan 28 15:36:48 crc kubenswrapper[4871]: I0128 15:36:48.792367 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-lps7c" Jan 28 15:36:49 crc kubenswrapper[4871]: I0128 15:36:49.181568 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tq5mb"] Jan 28 15:36:49 crc kubenswrapper[4871]: I0128 15:36:49.265339 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-lps7c"] Jan 28 15:36:49 crc kubenswrapper[4871]: W0128 15:36:49.266027 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86d7b0fe_3e9f_433d_907a_1d9714dd4d1a.slice/crio-eb3a523e637557a0d2a3fe0fb96e87e75c434d470b75d59ef8e1dcc8d9038685 WatchSource:0}: Error finding container eb3a523e637557a0d2a3fe0fb96e87e75c434d470b75d59ef8e1dcc8d9038685: Status 404 returned error can't find the container with id eb3a523e637557a0d2a3fe0fb96e87e75c434d470b75d59ef8e1dcc8d9038685 Jan 28 15:36:49 crc kubenswrapper[4871]: I0128 15:36:49.718835 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-tq5mb" event={"ID":"03215699-61b8-4b69-9fe4-fbee746d5f61","Type":"ContainerStarted","Data":"ee55409790b8c09363e1dc54527b6204f6ef176b528a46b41b70164301ebb2cf"} Jan 28 15:36:49 crc kubenswrapper[4871]: I0128 15:36:49.720120 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-lps7c" 
event={"ID":"86d7b0fe-3e9f-433d-907a-1d9714dd4d1a","Type":"ContainerStarted","Data":"eb3a523e637557a0d2a3fe0fb96e87e75c434d470b75d59ef8e1dcc8d9038685"} Jan 28 15:36:51 crc kubenswrapper[4871]: I0128 15:36:51.510624 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tq5mb"] Jan 28 15:36:51 crc kubenswrapper[4871]: I0128 15:36:51.538750 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-69txw"] Jan 28 15:36:51 crc kubenswrapper[4871]: I0128 15:36:51.540076 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-69txw" Jan 28 15:36:51 crc kubenswrapper[4871]: I0128 15:36:51.558512 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-69txw"] Jan 28 15:36:51 crc kubenswrapper[4871]: I0128 15:36:51.596122 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f1a59c5-9648-4b50-9131-83a436f5e6cf-config\") pod \"dnsmasq-dns-666b6646f7-69txw\" (UID: \"8f1a59c5-9648-4b50-9131-83a436f5e6cf\") " pod="openstack/dnsmasq-dns-666b6646f7-69txw" Jan 28 15:36:51 crc kubenswrapper[4871]: I0128 15:36:51.596387 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgxqx\" (UniqueName: \"kubernetes.io/projected/8f1a59c5-9648-4b50-9131-83a436f5e6cf-kube-api-access-wgxqx\") pod \"dnsmasq-dns-666b6646f7-69txw\" (UID: \"8f1a59c5-9648-4b50-9131-83a436f5e6cf\") " pod="openstack/dnsmasq-dns-666b6646f7-69txw" Jan 28 15:36:51 crc kubenswrapper[4871]: I0128 15:36:51.596559 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f1a59c5-9648-4b50-9131-83a436f5e6cf-dns-svc\") pod \"dnsmasq-dns-666b6646f7-69txw\" (UID: \"8f1a59c5-9648-4b50-9131-83a436f5e6cf\") " 
pod="openstack/dnsmasq-dns-666b6646f7-69txw" Jan 28 15:36:51 crc kubenswrapper[4871]: I0128 15:36:51.703933 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f1a59c5-9648-4b50-9131-83a436f5e6cf-config\") pod \"dnsmasq-dns-666b6646f7-69txw\" (UID: \"8f1a59c5-9648-4b50-9131-83a436f5e6cf\") " pod="openstack/dnsmasq-dns-666b6646f7-69txw" Jan 28 15:36:51 crc kubenswrapper[4871]: I0128 15:36:51.703995 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgxqx\" (UniqueName: \"kubernetes.io/projected/8f1a59c5-9648-4b50-9131-83a436f5e6cf-kube-api-access-wgxqx\") pod \"dnsmasq-dns-666b6646f7-69txw\" (UID: \"8f1a59c5-9648-4b50-9131-83a436f5e6cf\") " pod="openstack/dnsmasq-dns-666b6646f7-69txw" Jan 28 15:36:51 crc kubenswrapper[4871]: I0128 15:36:51.704092 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f1a59c5-9648-4b50-9131-83a436f5e6cf-dns-svc\") pod \"dnsmasq-dns-666b6646f7-69txw\" (UID: \"8f1a59c5-9648-4b50-9131-83a436f5e6cf\") " pod="openstack/dnsmasq-dns-666b6646f7-69txw" Jan 28 15:36:51 crc kubenswrapper[4871]: I0128 15:36:51.704913 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f1a59c5-9648-4b50-9131-83a436f5e6cf-dns-svc\") pod \"dnsmasq-dns-666b6646f7-69txw\" (UID: \"8f1a59c5-9648-4b50-9131-83a436f5e6cf\") " pod="openstack/dnsmasq-dns-666b6646f7-69txw" Jan 28 15:36:51 crc kubenswrapper[4871]: I0128 15:36:51.717027 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f1a59c5-9648-4b50-9131-83a436f5e6cf-config\") pod \"dnsmasq-dns-666b6646f7-69txw\" (UID: \"8f1a59c5-9648-4b50-9131-83a436f5e6cf\") " pod="openstack/dnsmasq-dns-666b6646f7-69txw" Jan 28 15:36:51 crc kubenswrapper[4871]: I0128 15:36:51.736261 4871 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgxqx\" (UniqueName: \"kubernetes.io/projected/8f1a59c5-9648-4b50-9131-83a436f5e6cf-kube-api-access-wgxqx\") pod \"dnsmasq-dns-666b6646f7-69txw\" (UID: \"8f1a59c5-9648-4b50-9131-83a436f5e6cf\") " pod="openstack/dnsmasq-dns-666b6646f7-69txw" Jan 28 15:36:51 crc kubenswrapper[4871]: I0128 15:36:51.796836 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-lps7c"] Jan 28 15:36:51 crc kubenswrapper[4871]: I0128 15:36:51.819716 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vp4mw"] Jan 28 15:36:51 crc kubenswrapper[4871]: I0128 15:36:51.824065 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vp4mw" Jan 28 15:36:51 crc kubenswrapper[4871]: I0128 15:36:51.829950 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vp4mw"] Jan 28 15:36:51 crc kubenswrapper[4871]: I0128 15:36:51.890003 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-69txw" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.012464 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb7bc661-06ed-4edf-afd4-0d8b48c8b738-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vp4mw\" (UID: \"eb7bc661-06ed-4edf-afd4-0d8b48c8b738\") " pod="openstack/dnsmasq-dns-57d769cc4f-vp4mw" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.013160 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbtb6\" (UniqueName: \"kubernetes.io/projected/eb7bc661-06ed-4edf-afd4-0d8b48c8b738-kube-api-access-wbtb6\") pod \"dnsmasq-dns-57d769cc4f-vp4mw\" (UID: \"eb7bc661-06ed-4edf-afd4-0d8b48c8b738\") " pod="openstack/dnsmasq-dns-57d769cc4f-vp4mw" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.013309 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb7bc661-06ed-4edf-afd4-0d8b48c8b738-config\") pod \"dnsmasq-dns-57d769cc4f-vp4mw\" (UID: \"eb7bc661-06ed-4edf-afd4-0d8b48c8b738\") " pod="openstack/dnsmasq-dns-57d769cc4f-vp4mw" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.116373 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbtb6\" (UniqueName: \"kubernetes.io/projected/eb7bc661-06ed-4edf-afd4-0d8b48c8b738-kube-api-access-wbtb6\") pod \"dnsmasq-dns-57d769cc4f-vp4mw\" (UID: \"eb7bc661-06ed-4edf-afd4-0d8b48c8b738\") " pod="openstack/dnsmasq-dns-57d769cc4f-vp4mw" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.116860 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb7bc661-06ed-4edf-afd4-0d8b48c8b738-config\") pod \"dnsmasq-dns-57d769cc4f-vp4mw\" (UID: 
\"eb7bc661-06ed-4edf-afd4-0d8b48c8b738\") " pod="openstack/dnsmasq-dns-57d769cc4f-vp4mw" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.117254 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb7bc661-06ed-4edf-afd4-0d8b48c8b738-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vp4mw\" (UID: \"eb7bc661-06ed-4edf-afd4-0d8b48c8b738\") " pod="openstack/dnsmasq-dns-57d769cc4f-vp4mw" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.118893 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb7bc661-06ed-4edf-afd4-0d8b48c8b738-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vp4mw\" (UID: \"eb7bc661-06ed-4edf-afd4-0d8b48c8b738\") " pod="openstack/dnsmasq-dns-57d769cc4f-vp4mw" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.119141 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb7bc661-06ed-4edf-afd4-0d8b48c8b738-config\") pod \"dnsmasq-dns-57d769cc4f-vp4mw\" (UID: \"eb7bc661-06ed-4edf-afd4-0d8b48c8b738\") " pod="openstack/dnsmasq-dns-57d769cc4f-vp4mw" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.158690 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbtb6\" (UniqueName: \"kubernetes.io/projected/eb7bc661-06ed-4edf-afd4-0d8b48c8b738-kube-api-access-wbtb6\") pod \"dnsmasq-dns-57d769cc4f-vp4mw\" (UID: \"eb7bc661-06ed-4edf-afd4-0d8b48c8b738\") " pod="openstack/dnsmasq-dns-57d769cc4f-vp4mw" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.227514 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vp4mw" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.476990 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-69txw"] Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.492195 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vp4mw"] Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.681260 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.683708 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.691513 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.691863 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.692142 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-pkzj8" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.692313 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.692520 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.692666 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.692773 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.698423 4871 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.830790 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.830889 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.830951 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.830978 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.831001 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " 
pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.831213 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwztz\" (UniqueName: \"kubernetes.io/projected/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-kube-api-access-mwztz\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.831232 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.831344 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-config-data\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.831394 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.831438 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.834664 
4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.936202 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-config-data\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.936788 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.936818 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.936851 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.936903 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.936932 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.936966 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.936987 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.937009 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.937030 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwztz\" (UniqueName: \"kubernetes.io/projected/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-kube-api-access-mwztz\") pod \"rabbitmq-server-0\" (UID: 
\"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.937048 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.937263 4871 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.938383 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.938527 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.938660 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.939000 4871 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.940386 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-config-data\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.944717 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.944825 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.946208 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.950566 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:52 crc kubenswrapper[4871]: I0128 15:36:52.957684 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwztz\" (UniqueName: \"kubernetes.io/projected/1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2-kube-api-access-mwztz\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:52.999977 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.004965 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.008192 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.010081 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.010213 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.010359 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.010501 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.010718 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-wppwb" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.010839 4871 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.018668 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.019902 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.028022 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.141827 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncpwj\" (UniqueName: \"kubernetes.io/projected/48f16980-86d0-4648-9ebd-a428b5253832-kube-api-access-ncpwj\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.141904 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/48f16980-86d0-4648-9ebd-a428b5253832-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.141993 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/48f16980-86d0-4648-9ebd-a428b5253832-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.142020 4871 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48f16980-86d0-4648-9ebd-a428b5253832-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.142143 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.142430 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/48f16980-86d0-4648-9ebd-a428b5253832-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.142516 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/48f16980-86d0-4648-9ebd-a428b5253832-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.142639 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/48f16980-86d0-4648-9ebd-a428b5253832-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.142736 4871 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/48f16980-86d0-4648-9ebd-a428b5253832-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.142808 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/48f16980-86d0-4648-9ebd-a428b5253832-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.142968 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/48f16980-86d0-4648-9ebd-a428b5253832-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.244486 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48f16980-86d0-4648-9ebd-a428b5253832-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.244727 4871 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.245518 4871 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48f16980-86d0-4648-9ebd-a428b5253832-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.246667 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.246783 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/48f16980-86d0-4648-9ebd-a428b5253832-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.246826 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/48f16980-86d0-4648-9ebd-a428b5253832-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.246891 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/48f16980-86d0-4648-9ebd-a428b5253832-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.246948 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/48f16980-86d0-4648-9ebd-a428b5253832-erlang-cookie-secret\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.246984 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/48f16980-86d0-4648-9ebd-a428b5253832-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.247069 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/48f16980-86d0-4648-9ebd-a428b5253832-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.247096 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncpwj\" (UniqueName: \"kubernetes.io/projected/48f16980-86d0-4648-9ebd-a428b5253832-kube-api-access-ncpwj\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.247135 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/48f16980-86d0-4648-9ebd-a428b5253832-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.247158 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/48f16980-86d0-4648-9ebd-a428b5253832-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.247561 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/48f16980-86d0-4648-9ebd-a428b5253832-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.248254 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/48f16980-86d0-4648-9ebd-a428b5253832-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.248625 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/48f16980-86d0-4648-9ebd-a428b5253832-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.248731 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/48f16980-86d0-4648-9ebd-a428b5253832-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.252163 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/48f16980-86d0-4648-9ebd-a428b5253832-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.252325 4871 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/48f16980-86d0-4648-9ebd-a428b5253832-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.252423 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/48f16980-86d0-4648-9ebd-a428b5253832-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.253040 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/48f16980-86d0-4648-9ebd-a428b5253832-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.271952 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncpwj\" (UniqueName: \"kubernetes.io/projected/48f16980-86d0-4648-9ebd-a428b5253832-kube-api-access-ncpwj\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.273136 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"48f16980-86d0-4648-9ebd-a428b5253832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:53 crc kubenswrapper[4871]: I0128 15:36:53.423206 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.188339 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.189780 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.192311 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.196913 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-smplb" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.197514 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.198109 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.202334 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.206135 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.365022 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7241d8aa-248e-46a8-88af-365415f843f8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.365109 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.365184 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq859\" (UniqueName: \"kubernetes.io/projected/7241d8aa-248e-46a8-88af-365415f843f8-kube-api-access-sq859\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.365211 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7241d8aa-248e-46a8-88af-365415f843f8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.365246 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7241d8aa-248e-46a8-88af-365415f843f8-config-data-default\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.365286 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7241d8aa-248e-46a8-88af-365415f843f8-kolla-config\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.365328 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/7241d8aa-248e-46a8-88af-365415f843f8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.365358 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7241d8aa-248e-46a8-88af-365415f843f8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.466221 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7241d8aa-248e-46a8-88af-365415f843f8-kolla-config\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.466286 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7241d8aa-248e-46a8-88af-365415f843f8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.466319 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7241d8aa-248e-46a8-88af-365415f843f8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.466354 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7241d8aa-248e-46a8-88af-365415f843f8-combined-ca-bundle\") pod \"openstack-galera-0\" 
(UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.466394 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.466439 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq859\" (UniqueName: \"kubernetes.io/projected/7241d8aa-248e-46a8-88af-365415f843f8-kube-api-access-sq859\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.466454 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7241d8aa-248e-46a8-88af-365415f843f8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.466476 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7241d8aa-248e-46a8-88af-365415f843f8-config-data-default\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.467014 4871 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 
15:36:54.467398 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7241d8aa-248e-46a8-88af-365415f843f8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.467429 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7241d8aa-248e-46a8-88af-365415f843f8-kolla-config\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.468341 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7241d8aa-248e-46a8-88af-365415f843f8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.468476 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7241d8aa-248e-46a8-88af-365415f843f8-config-data-default\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.471141 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7241d8aa-248e-46a8-88af-365415f843f8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.476366 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7241d8aa-248e-46a8-88af-365415f843f8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.490293 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq859\" (UniqueName: \"kubernetes.io/projected/7241d8aa-248e-46a8-88af-365415f843f8-kube-api-access-sq859\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.492316 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"7241d8aa-248e-46a8-88af-365415f843f8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:54 crc kubenswrapper[4871]: I0128 15:36:54.509304 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.425764 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.427483 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.431391 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-cgmj8" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.431698 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.432222 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.432257 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.439371 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.580092 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlvrv\" (UniqueName: \"kubernetes.io/projected/634ee164-2990-4b2b-88e4-ce901728e251-kube-api-access-vlvrv\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.580157 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.580180 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/634ee164-2990-4b2b-88e4-ce901728e251-galera-tls-certs\") 
pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.580201 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/634ee164-2990-4b2b-88e4-ce901728e251-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.580233 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/634ee164-2990-4b2b-88e4-ce901728e251-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.580358 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/634ee164-2990-4b2b-88e4-ce901728e251-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.580531 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/634ee164-2990-4b2b-88e4-ce901728e251-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.580578 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/634ee164-2990-4b2b-88e4-ce901728e251-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.682611 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.682673 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/634ee164-2990-4b2b-88e4-ce901728e251-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.682708 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/634ee164-2990-4b2b-88e4-ce901728e251-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.682752 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/634ee164-2990-4b2b-88e4-ce901728e251-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.682806 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/634ee164-2990-4b2b-88e4-ce901728e251-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.682850 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/634ee164-2990-4b2b-88e4-ce901728e251-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.682850 4871 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.682871 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/634ee164-2990-4b2b-88e4-ce901728e251-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.683348 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlvrv\" (UniqueName: \"kubernetes.io/projected/634ee164-2990-4b2b-88e4-ce901728e251-kube-api-access-vlvrv\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.683536 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/634ee164-2990-4b2b-88e4-ce901728e251-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.683787 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/634ee164-2990-4b2b-88e4-ce901728e251-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.684021 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/634ee164-2990-4b2b-88e4-ce901728e251-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.684624 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/634ee164-2990-4b2b-88e4-ce901728e251-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.688236 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/634ee164-2990-4b2b-88e4-ce901728e251-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.688289 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/634ee164-2990-4b2b-88e4-ce901728e251-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc 
kubenswrapper[4871]: I0128 15:36:55.702675 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.715142 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlvrv\" (UniqueName: \"kubernetes.io/projected/634ee164-2990-4b2b-88e4-ce901728e251-kube-api-access-vlvrv\") pod \"openstack-cell1-galera-0\" (UID: \"634ee164-2990-4b2b-88e4-ce901728e251\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.748259 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.820145 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.821313 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.823255 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.823698 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-t5bbt" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.824164 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.833578 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.988325 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/03065e0e-cdb6-49a2-bfe3-28236f770fdc-kolla-config\") pod \"memcached-0\" (UID: \"03065e0e-cdb6-49a2-bfe3-28236f770fdc\") " pod="openstack/memcached-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.988398 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/03065e0e-cdb6-49a2-bfe3-28236f770fdc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"03065e0e-cdb6-49a2-bfe3-28236f770fdc\") " pod="openstack/memcached-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.988435 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dr6x\" (UniqueName: \"kubernetes.io/projected/03065e0e-cdb6-49a2-bfe3-28236f770fdc-kube-api-access-2dr6x\") pod \"memcached-0\" (UID: \"03065e0e-cdb6-49a2-bfe3-28236f770fdc\") " pod="openstack/memcached-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.988469 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03065e0e-cdb6-49a2-bfe3-28236f770fdc-config-data\") pod \"memcached-0\" (UID: \"03065e0e-cdb6-49a2-bfe3-28236f770fdc\") " pod="openstack/memcached-0" Jan 28 15:36:55 crc kubenswrapper[4871]: I0128 15:36:55.988715 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03065e0e-cdb6-49a2-bfe3-28236f770fdc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"03065e0e-cdb6-49a2-bfe3-28236f770fdc\") " pod="openstack/memcached-0" Jan 28 15:36:56 crc kubenswrapper[4871]: I0128 15:36:56.090008 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03065e0e-cdb6-49a2-bfe3-28236f770fdc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"03065e0e-cdb6-49a2-bfe3-28236f770fdc\") " pod="openstack/memcached-0" Jan 28 15:36:56 crc kubenswrapper[4871]: I0128 15:36:56.090082 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/03065e0e-cdb6-49a2-bfe3-28236f770fdc-kolla-config\") pod \"memcached-0\" (UID: \"03065e0e-cdb6-49a2-bfe3-28236f770fdc\") " pod="openstack/memcached-0" Jan 28 15:36:56 crc kubenswrapper[4871]: I0128 15:36:56.090119 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/03065e0e-cdb6-49a2-bfe3-28236f770fdc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"03065e0e-cdb6-49a2-bfe3-28236f770fdc\") " pod="openstack/memcached-0" Jan 28 15:36:56 crc kubenswrapper[4871]: I0128 15:36:56.090145 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dr6x\" (UniqueName: \"kubernetes.io/projected/03065e0e-cdb6-49a2-bfe3-28236f770fdc-kube-api-access-2dr6x\") pod \"memcached-0\" (UID: 
\"03065e0e-cdb6-49a2-bfe3-28236f770fdc\") " pod="openstack/memcached-0" Jan 28 15:36:56 crc kubenswrapper[4871]: I0128 15:36:56.090173 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03065e0e-cdb6-49a2-bfe3-28236f770fdc-config-data\") pod \"memcached-0\" (UID: \"03065e0e-cdb6-49a2-bfe3-28236f770fdc\") " pod="openstack/memcached-0" Jan 28 15:36:56 crc kubenswrapper[4871]: I0128 15:36:56.091000 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03065e0e-cdb6-49a2-bfe3-28236f770fdc-config-data\") pod \"memcached-0\" (UID: \"03065e0e-cdb6-49a2-bfe3-28236f770fdc\") " pod="openstack/memcached-0" Jan 28 15:36:56 crc kubenswrapper[4871]: I0128 15:36:56.091323 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/03065e0e-cdb6-49a2-bfe3-28236f770fdc-kolla-config\") pod \"memcached-0\" (UID: \"03065e0e-cdb6-49a2-bfe3-28236f770fdc\") " pod="openstack/memcached-0" Jan 28 15:36:56 crc kubenswrapper[4871]: I0128 15:36:56.096480 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03065e0e-cdb6-49a2-bfe3-28236f770fdc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"03065e0e-cdb6-49a2-bfe3-28236f770fdc\") " pod="openstack/memcached-0" Jan 28 15:36:56 crc kubenswrapper[4871]: I0128 15:36:56.106167 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/03065e0e-cdb6-49a2-bfe3-28236f770fdc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"03065e0e-cdb6-49a2-bfe3-28236f770fdc\") " pod="openstack/memcached-0" Jan 28 15:36:56 crc kubenswrapper[4871]: I0128 15:36:56.110690 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dr6x\" (UniqueName: 
\"kubernetes.io/projected/03065e0e-cdb6-49a2-bfe3-28236f770fdc-kube-api-access-2dr6x\") pod \"memcached-0\" (UID: \"03065e0e-cdb6-49a2-bfe3-28236f770fdc\") " pod="openstack/memcached-0" Jan 28 15:36:56 crc kubenswrapper[4871]: I0128 15:36:56.141255 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 28 15:36:56 crc kubenswrapper[4871]: W0128 15:36:56.532747 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f1a59c5_9648_4b50_9131_83a436f5e6cf.slice/crio-de3065a1d1309770579012b63c52fb70e33ae62bf2bc03f0bb4477337aeddc14 WatchSource:0}: Error finding container de3065a1d1309770579012b63c52fb70e33ae62bf2bc03f0bb4477337aeddc14: Status 404 returned error can't find the container with id de3065a1d1309770579012b63c52fb70e33ae62bf2bc03f0bb4477337aeddc14 Jan 28 15:36:56 crc kubenswrapper[4871]: I0128 15:36:56.813547 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vp4mw" event={"ID":"eb7bc661-06ed-4edf-afd4-0d8b48c8b738","Type":"ContainerStarted","Data":"3c6658e9d4aa5abbeb5b1ba30c3c2a14d447b9384d6905db70caa4064d8a56ff"} Jan 28 15:36:56 crc kubenswrapper[4871]: I0128 15:36:56.814677 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-69txw" event={"ID":"8f1a59c5-9648-4b50-9131-83a436f5e6cf","Type":"ContainerStarted","Data":"de3065a1d1309770579012b63c52fb70e33ae62bf2bc03f0bb4477337aeddc14"} Jan 28 15:36:57 crc kubenswrapper[4871]: I0128 15:36:57.319320 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 15:36:57 crc kubenswrapper[4871]: I0128 15:36:57.320476 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 15:36:57 crc kubenswrapper[4871]: I0128 15:36:57.323499 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-nqt4w" Jan 28 15:36:57 crc kubenswrapper[4871]: I0128 15:36:57.330182 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 15:36:57 crc kubenswrapper[4871]: I0128 15:36:57.413803 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltftr\" (UniqueName: \"kubernetes.io/projected/24325972-e640-4b7b-b5c9-215dd8cd0fea-kube-api-access-ltftr\") pod \"kube-state-metrics-0\" (UID: \"24325972-e640-4b7b-b5c9-215dd8cd0fea\") " pod="openstack/kube-state-metrics-0" Jan 28 15:36:57 crc kubenswrapper[4871]: I0128 15:36:57.515221 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltftr\" (UniqueName: \"kubernetes.io/projected/24325972-e640-4b7b-b5c9-215dd8cd0fea-kube-api-access-ltftr\") pod \"kube-state-metrics-0\" (UID: \"24325972-e640-4b7b-b5c9-215dd8cd0fea\") " pod="openstack/kube-state-metrics-0" Jan 28 15:36:57 crc kubenswrapper[4871]: I0128 15:36:57.753565 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltftr\" (UniqueName: \"kubernetes.io/projected/24325972-e640-4b7b-b5c9-215dd8cd0fea-kube-api-access-ltftr\") pod \"kube-state-metrics-0\" (UID: \"24325972-e640-4b7b-b5c9-215dd8cd0fea\") " pod="openstack/kube-state-metrics-0" Jan 28 15:36:57 crc kubenswrapper[4871]: I0128 15:36:57.995231 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.288029 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2s4s6"] Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.290453 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.293708 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-svwn7" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.294403 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.295966 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2s4s6"] Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.307676 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.335110 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-c2xpq"] Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.337039 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.347116 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-c2xpq"] Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.374899 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da-scripts\") pod \"ovn-controller-ovs-c2xpq\" (UID: \"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da\") " pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.374950 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da-var-lib\") pod \"ovn-controller-ovs-c2xpq\" (UID: \"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da\") " pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.374979 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da-var-run\") pod \"ovn-controller-ovs-c2xpq\" (UID: \"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da\") " pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.375032 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwk5n\" (UniqueName: \"kubernetes.io/projected/8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da-kube-api-access-vwk5n\") pod \"ovn-controller-ovs-c2xpq\" (UID: \"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da\") " pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.375062 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/10434904-135c-4ec2-a483-1647ce52500b-ovn-controller-tls-certs\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.375108 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwzt8\" (UniqueName: \"kubernetes.io/projected/10434904-135c-4ec2-a483-1647ce52500b-kube-api-access-hwzt8\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.375163 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/10434904-135c-4ec2-a483-1647ce52500b-var-run-ovn\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.375193 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da-var-log\") pod \"ovn-controller-ovs-c2xpq\" (UID: \"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da\") " pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.375214 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10434904-135c-4ec2-a483-1647ce52500b-combined-ca-bundle\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.375234 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/10434904-135c-4ec2-a483-1647ce52500b-var-run\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.375258 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/10434904-135c-4ec2-a483-1647ce52500b-var-log-ovn\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.375297 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da-etc-ovs\") pod \"ovn-controller-ovs-c2xpq\" (UID: \"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da\") " pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.375329 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10434904-135c-4ec2-a483-1647ce52500b-scripts\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.477093 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da-etc-ovs\") pod \"ovn-controller-ovs-c2xpq\" (UID: \"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da\") " pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.477150 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10434904-135c-4ec2-a483-1647ce52500b-scripts\") pod \"ovn-controller-2s4s6\" (UID: 
\"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.477204 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da-scripts\") pod \"ovn-controller-ovs-c2xpq\" (UID: \"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da\") " pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.477226 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da-var-lib\") pod \"ovn-controller-ovs-c2xpq\" (UID: \"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da\") " pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.477243 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da-var-run\") pod \"ovn-controller-ovs-c2xpq\" (UID: \"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da\") " pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.477270 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwk5n\" (UniqueName: \"kubernetes.io/projected/8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da-kube-api-access-vwk5n\") pod \"ovn-controller-ovs-c2xpq\" (UID: \"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da\") " pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.477286 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/10434904-135c-4ec2-a483-1647ce52500b-ovn-controller-tls-certs\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc 
kubenswrapper[4871]: I0128 15:37:01.477304 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwzt8\" (UniqueName: \"kubernetes.io/projected/10434904-135c-4ec2-a483-1647ce52500b-kube-api-access-hwzt8\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.477319 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/10434904-135c-4ec2-a483-1647ce52500b-var-run-ovn\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.477334 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da-var-log\") pod \"ovn-controller-ovs-c2xpq\" (UID: \"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da\") " pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.477351 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10434904-135c-4ec2-a483-1647ce52500b-combined-ca-bundle\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.477370 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/10434904-135c-4ec2-a483-1647ce52500b-var-run\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.477385 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/10434904-135c-4ec2-a483-1647ce52500b-var-log-ovn\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.477719 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da-etc-ovs\") pod \"ovn-controller-ovs-c2xpq\" (UID: \"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da\") " pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.477808 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/10434904-135c-4ec2-a483-1647ce52500b-var-log-ovn\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.478552 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da-var-lib\") pod \"ovn-controller-ovs-c2xpq\" (UID: \"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da\") " pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.478653 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da-var-run\") pod \"ovn-controller-ovs-c2xpq\" (UID: \"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da\") " pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.478945 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da-var-log\") pod \"ovn-controller-ovs-c2xpq\" (UID: \"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da\") " 
pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.479064 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/10434904-135c-4ec2-a483-1647ce52500b-var-run-ovn\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.480209 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/10434904-135c-4ec2-a483-1647ce52500b-var-run\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.481313 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10434904-135c-4ec2-a483-1647ce52500b-scripts\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.482869 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10434904-135c-4ec2-a483-1647ce52500b-combined-ca-bundle\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.483305 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/10434904-135c-4ec2-a483-1647ce52500b-ovn-controller-tls-certs\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.483848 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da-scripts\") pod \"ovn-controller-ovs-c2xpq\" (UID: \"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da\") " pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.495218 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwk5n\" (UniqueName: \"kubernetes.io/projected/8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da-kube-api-access-vwk5n\") pod \"ovn-controller-ovs-c2xpq\" (UID: \"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da\") " pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.496780 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwzt8\" (UniqueName: \"kubernetes.io/projected/10434904-135c-4ec2-a483-1647ce52500b-kube-api-access-hwzt8\") pod \"ovn-controller-2s4s6\" (UID: \"10434904-135c-4ec2-a483-1647ce52500b\") " pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.611776 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:01 crc kubenswrapper[4871]: I0128 15:37:01.654459 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:02 crc kubenswrapper[4871]: I0128 15:37:02.722732 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 15:37:02 crc kubenswrapper[4871]: I0128 15:37:02.725577 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:02 crc kubenswrapper[4871]: I0128 15:37:02.728633 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-pnqfw" Jan 28 15:37:02 crc kubenswrapper[4871]: I0128 15:37:02.728876 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 28 15:37:02 crc kubenswrapper[4871]: I0128 15:37:02.728999 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 28 15:37:02 crc kubenswrapper[4871]: I0128 15:37:02.729085 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 28 15:37:02 crc kubenswrapper[4871]: I0128 15:37:02.729189 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 28 15:37:02 crc kubenswrapper[4871]: I0128 15:37:02.736518 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 15:37:02 crc kubenswrapper[4871]: I0128 15:37:02.898982 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8887227a-30f0-4a29-8018-2e18033b3b8f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:02 crc kubenswrapper[4871]: I0128 15:37:02.899057 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:02 crc kubenswrapper[4871]: I0128 15:37:02.899104 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8887227a-30f0-4a29-8018-2e18033b3b8f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:02 crc kubenswrapper[4871]: I0128 15:37:02.899145 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8887227a-30f0-4a29-8018-2e18033b3b8f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:02 crc kubenswrapper[4871]: I0128 15:37:02.899249 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8887227a-30f0-4a29-8018-2e18033b3b8f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:02 crc kubenswrapper[4871]: I0128 15:37:02.899346 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8887227a-30f0-4a29-8018-2e18033b3b8f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:02 crc kubenswrapper[4871]: I0128 15:37:02.899399 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmjdr\" (UniqueName: \"kubernetes.io/projected/8887227a-30f0-4a29-8018-2e18033b3b8f-kube-api-access-bmjdr\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:02 crc kubenswrapper[4871]: I0128 15:37:02.899507 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8887227a-30f0-4a29-8018-2e18033b3b8f-config\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:03 crc kubenswrapper[4871]: I0128 15:37:03.000717 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:03 crc kubenswrapper[4871]: I0128 15:37:03.001226 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8887227a-30f0-4a29-8018-2e18033b3b8f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:03 crc kubenswrapper[4871]: I0128 15:37:03.001251 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8887227a-30f0-4a29-8018-2e18033b3b8f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:03 crc kubenswrapper[4871]: I0128 15:37:03.001278 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8887227a-30f0-4a29-8018-2e18033b3b8f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:03 crc kubenswrapper[4871]: I0128 15:37:03.001314 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8887227a-30f0-4a29-8018-2e18033b3b8f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:03 
crc kubenswrapper[4871]: I0128 15:37:03.001338 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmjdr\" (UniqueName: \"kubernetes.io/projected/8887227a-30f0-4a29-8018-2e18033b3b8f-kube-api-access-bmjdr\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:03 crc kubenswrapper[4871]: I0128 15:37:03.001362 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8887227a-30f0-4a29-8018-2e18033b3b8f-config\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:03 crc kubenswrapper[4871]: I0128 15:37:03.001412 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8887227a-30f0-4a29-8018-2e18033b3b8f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:03 crc kubenswrapper[4871]: I0128 15:37:03.002842 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8887227a-30f0-4a29-8018-2e18033b3b8f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:03 crc kubenswrapper[4871]: I0128 15:37:03.001156 4871 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:03 crc kubenswrapper[4871]: I0128 15:37:03.035779 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:03 crc kubenswrapper[4871]: I0128 15:37:03.039490 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8887227a-30f0-4a29-8018-2e18033b3b8f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:03 crc kubenswrapper[4871]: I0128 15:37:03.043368 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8887227a-30f0-4a29-8018-2e18033b3b8f-config\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:03 crc kubenswrapper[4871]: I0128 15:37:03.044816 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8887227a-30f0-4a29-8018-2e18033b3b8f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:03 crc kubenswrapper[4871]: I0128 15:37:03.062215 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8887227a-30f0-4a29-8018-2e18033b3b8f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:03 crc kubenswrapper[4871]: I0128 15:37:03.168319 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8887227a-30f0-4a29-8018-2e18033b3b8f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:03 crc 
kubenswrapper[4871]: I0128 15:37:03.169037 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmjdr\" (UniqueName: \"kubernetes.io/projected/8887227a-30f0-4a29-8018-2e18033b3b8f-kube-api-access-bmjdr\") pod \"ovsdbserver-nb-0\" (UID: \"8887227a-30f0-4a29-8018-2e18033b3b8f\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:03 crc kubenswrapper[4871]: I0128 15:37:03.345430 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.548663 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.550889 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.553298 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.553532 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.554147 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.554423 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-xnpmn" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.557185 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.576789 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fc41ff5-8884-408d-94ca-512e6c34e2d3-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.576842 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jx7b\" (UniqueName: \"kubernetes.io/projected/3fc41ff5-8884-408d-94ca-512e6c34e2d3-kube-api-access-9jx7b\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.576998 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fc41ff5-8884-408d-94ca-512e6c34e2d3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.577040 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.577100 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3fc41ff5-8884-408d-94ca-512e6c34e2d3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.577128 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fc41ff5-8884-408d-94ca-512e6c34e2d3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 
crc kubenswrapper[4871]: I0128 15:37:04.577192 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fc41ff5-8884-408d-94ca-512e6c34e2d3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.577215 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc41ff5-8884-408d-94ca-512e6c34e2d3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.678357 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fc41ff5-8884-408d-94ca-512e6c34e2d3-config\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.678411 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jx7b\" (UniqueName: \"kubernetes.io/projected/3fc41ff5-8884-408d-94ca-512e6c34e2d3-kube-api-access-9jx7b\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.678454 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fc41ff5-8884-408d-94ca-512e6c34e2d3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.678477 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.678525 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3fc41ff5-8884-408d-94ca-512e6c34e2d3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.678555 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fc41ff5-8884-408d-94ca-512e6c34e2d3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.678640 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fc41ff5-8884-408d-94ca-512e6c34e2d3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.678665 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc41ff5-8884-408d-94ca-512e6c34e2d3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.679286 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fc41ff5-8884-408d-94ca-512e6c34e2d3-config\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " 
pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.679546 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3fc41ff5-8884-408d-94ca-512e6c34e2d3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.679709 4871 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.679910 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fc41ff5-8884-408d-94ca-512e6c34e2d3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.686701 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fc41ff5-8884-408d-94ca-512e6c34e2d3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.686721 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fc41ff5-8884-408d-94ca-512e6c34e2d3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.688181 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc41ff5-8884-408d-94ca-512e6c34e2d3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.699730 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jx7b\" (UniqueName: \"kubernetes.io/projected/3fc41ff5-8884-408d-94ca-512e6c34e2d3-kube-api-access-9jx7b\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.722151 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3fc41ff5-8884-408d-94ca-512e6c34e2d3\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:04 crc kubenswrapper[4871]: I0128 15:37:04.928801 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:11 crc kubenswrapper[4871]: E0128 15:37:11.943037 4871 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 15:37:11 crc kubenswrapper[4871]: E0128 15:37:11.943791 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v4wqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-tq5mb_openstack(03215699-61b8-4b69-9fe4-fbee746d5f61): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:11 crc kubenswrapper[4871]: E0128 15:37:11.945639 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-tq5mb" podUID="03215699-61b8-4b69-9fe4-fbee746d5f61" Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.039110 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.048258 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.054386 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.149667 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.248729 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2","Type":"ContainerStarted","Data":"57c306120546f44e285e2f198c734d2d9d5b397973796b7e060090af561a8231"} Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.250165 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/memcached-0" event={"ID":"03065e0e-cdb6-49a2-bfe3-28236f770fdc","Type":"ContainerStarted","Data":"6eed611bc0ea7239697b7762d654f9e72a241fbcb59c784cb94b4c534dd9264b"} Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.251705 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"634ee164-2990-4b2b-88e4-ce901728e251","Type":"ContainerStarted","Data":"806c51b3b54a4f525da4b865ee504614e58a232c1c98f3eceffdef210cb42876"} Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.253056 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"7241d8aa-248e-46a8-88af-365415f843f8","Type":"ContainerStarted","Data":"67de82d79e3dd7bd6fee40b78b8f9905e19ee48ea2cd7c3bb8b5b102ce20579c"} Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.469805 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2s4s6"] Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.484265 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.492067 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.575005 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.575501 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tq5mb" Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.649654 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4wqk\" (UniqueName: \"kubernetes.io/projected/03215699-61b8-4b69-9fe4-fbee746d5f61-kube-api-access-v4wqk\") pod \"03215699-61b8-4b69-9fe4-fbee746d5f61\" (UID: \"03215699-61b8-4b69-9fe4-fbee746d5f61\") " Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.649876 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03215699-61b8-4b69-9fe4-fbee746d5f61-config\") pod \"03215699-61b8-4b69-9fe4-fbee746d5f61\" (UID: \"03215699-61b8-4b69-9fe4-fbee746d5f61\") " Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.651018 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03215699-61b8-4b69-9fe4-fbee746d5f61-config" (OuterVolumeSpecName: "config") pod "03215699-61b8-4b69-9fe4-fbee746d5f61" (UID: "03215699-61b8-4b69-9fe4-fbee746d5f61"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.657408 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03215699-61b8-4b69-9fe4-fbee746d5f61-kube-api-access-v4wqk" (OuterVolumeSpecName: "kube-api-access-v4wqk") pod "03215699-61b8-4b69-9fe4-fbee746d5f61" (UID: "03215699-61b8-4b69-9fe4-fbee746d5f61"). InnerVolumeSpecName "kube-api-access-v4wqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.686971 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-c2xpq"] Jan 28 15:37:12 crc kubenswrapper[4871]: W0128 15:37:12.696101 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fcf3e7f_c499_44a8_a2c8_ddb97f31b7da.slice/crio-ff2a494b9ad98d5b6ebff9af43d70a81d7ebd1c5d3aa6ef51d74cc3017c4d4e8 WatchSource:0}: Error finding container ff2a494b9ad98d5b6ebff9af43d70a81d7ebd1c5d3aa6ef51d74cc3017c4d4e8: Status 404 returned error can't find the container with id ff2a494b9ad98d5b6ebff9af43d70a81d7ebd1c5d3aa6ef51d74cc3017c4d4e8 Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.752529 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03215699-61b8-4b69-9fe4-fbee746d5f61-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:12 crc kubenswrapper[4871]: I0128 15:37:12.752610 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4wqk\" (UniqueName: \"kubernetes.io/projected/03215699-61b8-4b69-9fe4-fbee746d5f61-kube-api-access-v4wqk\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:13 crc kubenswrapper[4871]: I0128 15:37:13.261368 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2s4s6" event={"ID":"10434904-135c-4ec2-a483-1647ce52500b","Type":"ContainerStarted","Data":"f6bfa7fe17aded9181c63661bf31b11d08f270d1dbe2cdf29873f4e3a312c8d7"} Jan 28 15:37:13 crc kubenswrapper[4871]: I0128 15:37:13.262973 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-tq5mb" event={"ID":"03215699-61b8-4b69-9fe4-fbee746d5f61","Type":"ContainerDied","Data":"ee55409790b8c09363e1dc54527b6204f6ef176b528a46b41b70164301ebb2cf"} Jan 28 15:37:13 crc kubenswrapper[4871]: I0128 15:37:13.263069 4871 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tq5mb" Jan 28 15:37:13 crc kubenswrapper[4871]: I0128 15:37:13.266387 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"24325972-e640-4b7b-b5c9-215dd8cd0fea","Type":"ContainerStarted","Data":"49e3b6d6f76b6c7e3e88895327d9c5af7bdd489296c1f8c405ee35ac6fb39d10"} Jan 28 15:37:13 crc kubenswrapper[4871]: I0128 15:37:13.268207 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c2xpq" event={"ID":"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da","Type":"ContainerStarted","Data":"ff2a494b9ad98d5b6ebff9af43d70a81d7ebd1c5d3aa6ef51d74cc3017c4d4e8"} Jan 28 15:37:13 crc kubenswrapper[4871]: I0128 15:37:13.269766 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8887227a-30f0-4a29-8018-2e18033b3b8f","Type":"ContainerStarted","Data":"6c8deaf8193126390daf6923482a6faa2292d60663fbe2ccfe21f5c355449883"} Jan 28 15:37:13 crc kubenswrapper[4871]: I0128 15:37:13.271334 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"48f16980-86d0-4648-9ebd-a428b5253832","Type":"ContainerStarted","Data":"4e2630071632109f7267643ec3a796e00b2b76199ac24f2373e0627a0dd84479"} Jan 28 15:37:13 crc kubenswrapper[4871]: I0128 15:37:13.313276 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tq5mb"] Jan 28 15:37:13 crc kubenswrapper[4871]: I0128 15:37:13.321027 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tq5mb"] Jan 28 15:37:13 crc kubenswrapper[4871]: I0128 15:37:13.395917 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 15:37:13 crc kubenswrapper[4871]: W0128 15:37:13.403324 4871 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fc41ff5_8884_408d_94ca_512e6c34e2d3.slice/crio-42612dc050f12c5f400301d631e1aa0873eccb0ff1664e281d97ad5d26982786 WatchSource:0}: Error finding container 42612dc050f12c5f400301d631e1aa0873eccb0ff1664e281d97ad5d26982786: Status 404 returned error can't find the container with id 42612dc050f12c5f400301d631e1aa0873eccb0ff1664e281d97ad5d26982786 Jan 28 15:37:13 crc kubenswrapper[4871]: I0128 15:37:13.906362 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-nl92r"] Jan 28 15:37:13 crc kubenswrapper[4871]: I0128 15:37:13.907651 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:13 crc kubenswrapper[4871]: I0128 15:37:13.911540 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 28 15:37:13 crc kubenswrapper[4871]: I0128 15:37:13.937573 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-nl92r"] Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.044058 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vp4mw"] Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.072672 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-ngql8"] Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.074155 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.074857 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5gnh\" (UniqueName: \"kubernetes.io/projected/81859795-4888-4eae-8589-5a5a4992d584-kube-api-access-n5gnh\") pod \"ovn-controller-metrics-nl92r\" (UID: \"81859795-4888-4eae-8589-5a5a4992d584\") " pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.074905 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81859795-4888-4eae-8589-5a5a4992d584-combined-ca-bundle\") pod \"ovn-controller-metrics-nl92r\" (UID: \"81859795-4888-4eae-8589-5a5a4992d584\") " pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.075915 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.076493 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/81859795-4888-4eae-8589-5a5a4992d584-ovs-rundir\") pod \"ovn-controller-metrics-nl92r\" (UID: \"81859795-4888-4eae-8589-5a5a4992d584\") " pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.076542 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/81859795-4888-4eae-8589-5a5a4992d584-ovn-rundir\") pod \"ovn-controller-metrics-nl92r\" (UID: \"81859795-4888-4eae-8589-5a5a4992d584\") " pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.076629 4871 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/81859795-4888-4eae-8589-5a5a4992d584-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nl92r\" (UID: \"81859795-4888-4eae-8589-5a5a4992d584\") " pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.076656 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81859795-4888-4eae-8589-5a5a4992d584-config\") pod \"ovn-controller-metrics-nl92r\" (UID: \"81859795-4888-4eae-8589-5a5a4992d584\") " pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.092759 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-ngql8"] Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.165018 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-69txw"] Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.178047 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/118fc506-61f1-42e4-ac40-1442d20e4708-config\") pod \"dnsmasq-dns-7fd796d7df-ngql8\" (UID: \"118fc506-61f1-42e4-ac40-1442d20e4708\") " pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.178186 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/81859795-4888-4eae-8589-5a5a4992d584-ovs-rundir\") pod \"ovn-controller-metrics-nl92r\" (UID: \"81859795-4888-4eae-8589-5a5a4992d584\") " pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.178217 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/81859795-4888-4eae-8589-5a5a4992d584-ovn-rundir\") pod \"ovn-controller-metrics-nl92r\" (UID: \"81859795-4888-4eae-8589-5a5a4992d584\") " pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.178236 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/81859795-4888-4eae-8589-5a5a4992d584-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nl92r\" (UID: \"81859795-4888-4eae-8589-5a5a4992d584\") " pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.178260 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81859795-4888-4eae-8589-5a5a4992d584-config\") pod \"ovn-controller-metrics-nl92r\" (UID: \"81859795-4888-4eae-8589-5a5a4992d584\") " pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.178553 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/81859795-4888-4eae-8589-5a5a4992d584-ovn-rundir\") pod \"ovn-controller-metrics-nl92r\" (UID: \"81859795-4888-4eae-8589-5a5a4992d584\") " pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.178576 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/81859795-4888-4eae-8589-5a5a4992d584-ovs-rundir\") pod \"ovn-controller-metrics-nl92r\" (UID: \"81859795-4888-4eae-8589-5a5a4992d584\") " pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.178672 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5gnh\" (UniqueName: 
\"kubernetes.io/projected/81859795-4888-4eae-8589-5a5a4992d584-kube-api-access-n5gnh\") pod \"ovn-controller-metrics-nl92r\" (UID: \"81859795-4888-4eae-8589-5a5a4992d584\") " pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.178716 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81859795-4888-4eae-8589-5a5a4992d584-combined-ca-bundle\") pod \"ovn-controller-metrics-nl92r\" (UID: \"81859795-4888-4eae-8589-5a5a4992d584\") " pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.178994 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/118fc506-61f1-42e4-ac40-1442d20e4708-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-ngql8\" (UID: \"118fc506-61f1-42e4-ac40-1442d20e4708\") " pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.179334 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81859795-4888-4eae-8589-5a5a4992d584-config\") pod \"ovn-controller-metrics-nl92r\" (UID: \"81859795-4888-4eae-8589-5a5a4992d584\") " pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.179431 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q26z\" (UniqueName: \"kubernetes.io/projected/118fc506-61f1-42e4-ac40-1442d20e4708-kube-api-access-4q26z\") pod \"dnsmasq-dns-7fd796d7df-ngql8\" (UID: \"118fc506-61f1-42e4-ac40-1442d20e4708\") " pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.179462 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/118fc506-61f1-42e4-ac40-1442d20e4708-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-ngql8\" (UID: \"118fc506-61f1-42e4-ac40-1442d20e4708\") " pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.183262 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81859795-4888-4eae-8589-5a5a4992d584-combined-ca-bundle\") pod \"ovn-controller-metrics-nl92r\" (UID: \"81859795-4888-4eae-8589-5a5a4992d584\") " pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.199377 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/81859795-4888-4eae-8589-5a5a4992d584-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nl92r\" (UID: \"81859795-4888-4eae-8589-5a5a4992d584\") " pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.200213 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5gnh\" (UniqueName: \"kubernetes.io/projected/81859795-4888-4eae-8589-5a5a4992d584-kube-api-access-n5gnh\") pod \"ovn-controller-metrics-nl92r\" (UID: \"81859795-4888-4eae-8589-5a5a4992d584\") " pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.206678 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-98wd7"] Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.208239 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.211023 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.220845 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-98wd7"] Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.233106 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-nl92r" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.280209 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/118fc506-61f1-42e4-ac40-1442d20e4708-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-ngql8\" (UID: \"118fc506-61f1-42e4-ac40-1442d20e4708\") " pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.280318 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q26z\" (UniqueName: \"kubernetes.io/projected/118fc506-61f1-42e4-ac40-1442d20e4708-kube-api-access-4q26z\") pod \"dnsmasq-dns-7fd796d7df-ngql8\" (UID: \"118fc506-61f1-42e4-ac40-1442d20e4708\") " pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.280349 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/118fc506-61f1-42e4-ac40-1442d20e4708-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-ngql8\" (UID: \"118fc506-61f1-42e4-ac40-1442d20e4708\") " pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.280385 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/118fc506-61f1-42e4-ac40-1442d20e4708-config\") pod 
\"dnsmasq-dns-7fd796d7df-ngql8\" (UID: \"118fc506-61f1-42e4-ac40-1442d20e4708\") " pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.280418 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-98wd7\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") " pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.280466 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-98wd7\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") " pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.280489 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-98wd7\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") " pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.280506 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-config\") pod \"dnsmasq-dns-86db49b7ff-98wd7\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") " pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.280525 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tlzd\" (UniqueName: 
\"kubernetes.io/projected/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-kube-api-access-8tlzd\") pod \"dnsmasq-dns-86db49b7ff-98wd7\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") " pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.281235 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/118fc506-61f1-42e4-ac40-1442d20e4708-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-ngql8\" (UID: \"118fc506-61f1-42e4-ac40-1442d20e4708\") " pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.281502 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/118fc506-61f1-42e4-ac40-1442d20e4708-config\") pod \"dnsmasq-dns-7fd796d7df-ngql8\" (UID: \"118fc506-61f1-42e4-ac40-1442d20e4708\") " pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.281699 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/118fc506-61f1-42e4-ac40-1442d20e4708-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-ngql8\" (UID: \"118fc506-61f1-42e4-ac40-1442d20e4708\") " pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.282921 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3fc41ff5-8884-408d-94ca-512e6c34e2d3","Type":"ContainerStarted","Data":"42612dc050f12c5f400301d631e1aa0873eccb0ff1664e281d97ad5d26982786"} Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.297501 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q26z\" (UniqueName: \"kubernetes.io/projected/118fc506-61f1-42e4-ac40-1442d20e4708-kube-api-access-4q26z\") pod \"dnsmasq-dns-7fd796d7df-ngql8\" (UID: \"118fc506-61f1-42e4-ac40-1442d20e4708\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.381143 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-98wd7\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") " pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.381199 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-98wd7\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") " pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.381224 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-98wd7\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") " pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.381242 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-config\") pod \"dnsmasq-dns-86db49b7ff-98wd7\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") " pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.381260 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tlzd\" (UniqueName: \"kubernetes.io/projected/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-kube-api-access-8tlzd\") pod \"dnsmasq-dns-86db49b7ff-98wd7\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") " pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:14 crc 
kubenswrapper[4871]: I0128 15:37:14.382877 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-98wd7\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") " pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.382890 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-98wd7\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") " pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.383023 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-98wd7\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") " pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.383612 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-config\") pod \"dnsmasq-dns-86db49b7ff-98wd7\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") " pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.387879 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.400275 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tlzd\" (UniqueName: \"kubernetes.io/projected/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-kube-api-access-8tlzd\") pod \"dnsmasq-dns-86db49b7ff-98wd7\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") " pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.564989 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.678783 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-nl92r"] Jan 28 15:37:14 crc kubenswrapper[4871]: W0128 15:37:14.680342 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81859795_4888_4eae_8589_5a5a4992d584.slice/crio-ba63ec090d2379284842430df27cfc42f045bfb5bc507d715d9d45a5a15bdc59 WatchSource:0}: Error finding container ba63ec090d2379284842430df27cfc42f045bfb5bc507d715d9d45a5a15bdc59: Status 404 returned error can't find the container with id ba63ec090d2379284842430df27cfc42f045bfb5bc507d715d9d45a5a15bdc59 Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.889799 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-ngql8"] Jan 28 15:37:14 crc kubenswrapper[4871]: W0128 15:37:14.895790 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod118fc506_61f1_42e4_ac40_1442d20e4708.slice/crio-49d6a2111a6c7d0c822a7991daddc86544d016299ab288f30a11d847ed659b16 WatchSource:0}: Error finding container 49d6a2111a6c7d0c822a7991daddc86544d016299ab288f30a11d847ed659b16: Status 404 returned error can't find the container with id 
49d6a2111a6c7d0c822a7991daddc86544d016299ab288f30a11d847ed659b16 Jan 28 15:37:14 crc kubenswrapper[4871]: I0128 15:37:14.914047 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03215699-61b8-4b69-9fe4-fbee746d5f61" path="/var/lib/kubelet/pods/03215699-61b8-4b69-9fe4-fbee746d5f61/volumes" Jan 28 15:37:15 crc kubenswrapper[4871]: I0128 15:37:15.024348 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-98wd7"] Jan 28 15:37:15 crc kubenswrapper[4871]: W0128 15:37:15.030651 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a3637bf_9cf3_47b6_ae0e_e77c0476e060.slice/crio-c5a183e29d0025f7833cf915276e966620f20d39357620e5086fa719b4c6d925 WatchSource:0}: Error finding container c5a183e29d0025f7833cf915276e966620f20d39357620e5086fa719b4c6d925: Status 404 returned error can't find the container with id c5a183e29d0025f7833cf915276e966620f20d39357620e5086fa719b4c6d925 Jan 28 15:37:15 crc kubenswrapper[4871]: I0128 15:37:15.450250 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" event={"ID":"3a3637bf-9cf3-47b6-ae0e-e77c0476e060","Type":"ContainerStarted","Data":"c5a183e29d0025f7833cf915276e966620f20d39357620e5086fa719b4c6d925"} Jan 28 15:37:15 crc kubenswrapper[4871]: I0128 15:37:15.452627 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" event={"ID":"118fc506-61f1-42e4-ac40-1442d20e4708","Type":"ContainerStarted","Data":"49d6a2111a6c7d0c822a7991daddc86544d016299ab288f30a11d847ed659b16"} Jan 28 15:37:15 crc kubenswrapper[4871]: I0128 15:37:15.453957 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-nl92r" event={"ID":"81859795-4888-4eae-8589-5a5a4992d584","Type":"ContainerStarted","Data":"ba63ec090d2379284842430df27cfc42f045bfb5bc507d715d9d45a5a15bdc59"} Jan 28 15:37:16 crc 
kubenswrapper[4871]: E0128 15:37:16.287944 4871 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 15:37:16 crc kubenswrapper[4871]: E0128 15:37:16.288902 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5sx8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged
:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-lps7c_openstack(86d7b0fe-3e9f-433d-907a-1d9714dd4d1a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:16 crc kubenswrapper[4871]: E0128 15:37:16.290278 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-lps7c" podUID="86d7b0fe-3e9f-433d-907a-1d9714dd4d1a" Jan 28 15:37:16 crc kubenswrapper[4871]: I0128 15:37:16.766760 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-lps7c" Jan 28 15:37:16 crc kubenswrapper[4871]: I0128 15:37:16.860397 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86d7b0fe-3e9f-433d-907a-1d9714dd4d1a-config\") pod \"86d7b0fe-3e9f-433d-907a-1d9714dd4d1a\" (UID: \"86d7b0fe-3e9f-433d-907a-1d9714dd4d1a\") " Jan 28 15:37:16 crc kubenswrapper[4871]: I0128 15:37:16.860497 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86d7b0fe-3e9f-433d-907a-1d9714dd4d1a-dns-svc\") pod \"86d7b0fe-3e9f-433d-907a-1d9714dd4d1a\" (UID: \"86d7b0fe-3e9f-433d-907a-1d9714dd4d1a\") " Jan 28 15:37:16 crc kubenswrapper[4871]: I0128 15:37:16.860549 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sx8r\" (UniqueName: \"kubernetes.io/projected/86d7b0fe-3e9f-433d-907a-1d9714dd4d1a-kube-api-access-5sx8r\") pod \"86d7b0fe-3e9f-433d-907a-1d9714dd4d1a\" (UID: \"86d7b0fe-3e9f-433d-907a-1d9714dd4d1a\") " Jan 28 15:37:16 crc kubenswrapper[4871]: I0128 15:37:16.860975 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86d7b0fe-3e9f-433d-907a-1d9714dd4d1a-config" (OuterVolumeSpecName: "config") pod "86d7b0fe-3e9f-433d-907a-1d9714dd4d1a" (UID: "86d7b0fe-3e9f-433d-907a-1d9714dd4d1a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:16 crc kubenswrapper[4871]: I0128 15:37:16.861417 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86d7b0fe-3e9f-433d-907a-1d9714dd4d1a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "86d7b0fe-3e9f-433d-907a-1d9714dd4d1a" (UID: "86d7b0fe-3e9f-433d-907a-1d9714dd4d1a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:16 crc kubenswrapper[4871]: I0128 15:37:16.938660 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86d7b0fe-3e9f-433d-907a-1d9714dd4d1a-kube-api-access-5sx8r" (OuterVolumeSpecName: "kube-api-access-5sx8r") pod "86d7b0fe-3e9f-433d-907a-1d9714dd4d1a" (UID: "86d7b0fe-3e9f-433d-907a-1d9714dd4d1a"). InnerVolumeSpecName "kube-api-access-5sx8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:17 crc kubenswrapper[4871]: I0128 15:37:17.171127 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86d7b0fe-3e9f-433d-907a-1d9714dd4d1a-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:17 crc kubenswrapper[4871]: I0128 15:37:17.171167 4871 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86d7b0fe-3e9f-433d-907a-1d9714dd4d1a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:17 crc kubenswrapper[4871]: I0128 15:37:17.171180 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sx8r\" (UniqueName: \"kubernetes.io/projected/86d7b0fe-3e9f-433d-907a-1d9714dd4d1a-kube-api-access-5sx8r\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:17 crc kubenswrapper[4871]: I0128 15:37:17.474073 4871 generic.go:334] "Generic (PLEG): container finished" podID="8f1a59c5-9648-4b50-9131-83a436f5e6cf" containerID="3c65d00dca9b627262574163bd9a0117d3625ee242da609baacf96677a9147b6" exitCode=0 Jan 28 15:37:17 crc kubenswrapper[4871]: I0128 15:37:17.474165 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-69txw" event={"ID":"8f1a59c5-9648-4b50-9131-83a436f5e6cf","Type":"ContainerDied","Data":"3c65d00dca9b627262574163bd9a0117d3625ee242da609baacf96677a9147b6"} Jan 28 15:37:17 crc kubenswrapper[4871]: I0128 15:37:17.480156 4871 generic.go:334] "Generic (PLEG): container finished" 
podID="118fc506-61f1-42e4-ac40-1442d20e4708" containerID="1767e6abe2cd8134c3d6b059161c0fcd3e4913921a434914ec37dd90cbc9a045" exitCode=0 Jan 28 15:37:17 crc kubenswrapper[4871]: I0128 15:37:17.480941 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" event={"ID":"118fc506-61f1-42e4-ac40-1442d20e4708","Type":"ContainerDied","Data":"1767e6abe2cd8134c3d6b059161c0fcd3e4913921a434914ec37dd90cbc9a045"} Jan 28 15:37:17 crc kubenswrapper[4871]: I0128 15:37:17.483548 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-lps7c" event={"ID":"86d7b0fe-3e9f-433d-907a-1d9714dd4d1a","Type":"ContainerDied","Data":"eb3a523e637557a0d2a3fe0fb96e87e75c434d470b75d59ef8e1dcc8d9038685"} Jan 28 15:37:17 crc kubenswrapper[4871]: I0128 15:37:17.483556 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-lps7c" Jan 28 15:37:17 crc kubenswrapper[4871]: I0128 15:37:17.486859 4871 generic.go:334] "Generic (PLEG): container finished" podID="3a3637bf-9cf3-47b6-ae0e-e77c0476e060" containerID="689be453253dfc8ecb69ee356f4acdcee93625122871952a90d3e930df05319e" exitCode=0 Jan 28 15:37:17 crc kubenswrapper[4871]: I0128 15:37:17.487022 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" event={"ID":"3a3637bf-9cf3-47b6-ae0e-e77c0476e060","Type":"ContainerDied","Data":"689be453253dfc8ecb69ee356f4acdcee93625122871952a90d3e930df05319e"} Jan 28 15:37:17 crc kubenswrapper[4871]: I0128 15:37:17.517984 4871 generic.go:334] "Generic (PLEG): container finished" podID="eb7bc661-06ed-4edf-afd4-0d8b48c8b738" containerID="44d2d4072d59a05c60e20a6b18cc595aa65b753416f6f3bdf19b9a2621c79267" exitCode=0 Jan 28 15:37:17 crc kubenswrapper[4871]: I0128 15:37:17.518021 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vp4mw" 
event={"ID":"eb7bc661-06ed-4edf-afd4-0d8b48c8b738","Type":"ContainerDied","Data":"44d2d4072d59a05c60e20a6b18cc595aa65b753416f6f3bdf19b9a2621c79267"} Jan 28 15:37:17 crc kubenswrapper[4871]: I0128 15:37:17.616280 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-lps7c"] Jan 28 15:37:17 crc kubenswrapper[4871]: I0128 15:37:17.627219 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-lps7c"] Jan 28 15:37:18 crc kubenswrapper[4871]: I0128 15:37:18.920999 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86d7b0fe-3e9f-433d-907a-1d9714dd4d1a" path="/var/lib/kubelet/pods/86d7b0fe-3e9f-433d-907a-1d9714dd4d1a/volumes" Jan 28 15:37:20 crc kubenswrapper[4871]: I0128 15:37:20.736342 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-69txw" Jan 28 15:37:20 crc kubenswrapper[4871]: I0128 15:37:20.844143 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f1a59c5-9648-4b50-9131-83a436f5e6cf-config\") pod \"8f1a59c5-9648-4b50-9131-83a436f5e6cf\" (UID: \"8f1a59c5-9648-4b50-9131-83a436f5e6cf\") " Jan 28 15:37:20 crc kubenswrapper[4871]: I0128 15:37:20.844230 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgxqx\" (UniqueName: \"kubernetes.io/projected/8f1a59c5-9648-4b50-9131-83a436f5e6cf-kube-api-access-wgxqx\") pod \"8f1a59c5-9648-4b50-9131-83a436f5e6cf\" (UID: \"8f1a59c5-9648-4b50-9131-83a436f5e6cf\") " Jan 28 15:37:20 crc kubenswrapper[4871]: I0128 15:37:20.844270 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f1a59c5-9648-4b50-9131-83a436f5e6cf-dns-svc\") pod \"8f1a59c5-9648-4b50-9131-83a436f5e6cf\" (UID: \"8f1a59c5-9648-4b50-9131-83a436f5e6cf\") " Jan 28 15:37:20 crc 
kubenswrapper[4871]: I0128 15:37:20.849327 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f1a59c5-9648-4b50-9131-83a436f5e6cf-kube-api-access-wgxqx" (OuterVolumeSpecName: "kube-api-access-wgxqx") pod "8f1a59c5-9648-4b50-9131-83a436f5e6cf" (UID: "8f1a59c5-9648-4b50-9131-83a436f5e6cf"). InnerVolumeSpecName "kube-api-access-wgxqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:20 crc kubenswrapper[4871]: I0128 15:37:20.869579 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f1a59c5-9648-4b50-9131-83a436f5e6cf-config" (OuterVolumeSpecName: "config") pod "8f1a59c5-9648-4b50-9131-83a436f5e6cf" (UID: "8f1a59c5-9648-4b50-9131-83a436f5e6cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:20 crc kubenswrapper[4871]: I0128 15:37:20.879655 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f1a59c5-9648-4b50-9131-83a436f5e6cf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8f1a59c5-9648-4b50-9131-83a436f5e6cf" (UID: "8f1a59c5-9648-4b50-9131-83a436f5e6cf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:20 crc kubenswrapper[4871]: I0128 15:37:20.945954 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f1a59c5-9648-4b50-9131-83a436f5e6cf-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:20 crc kubenswrapper[4871]: I0128 15:37:20.945983 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgxqx\" (UniqueName: \"kubernetes.io/projected/8f1a59c5-9648-4b50-9131-83a436f5e6cf-kube-api-access-wgxqx\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:20 crc kubenswrapper[4871]: I0128 15:37:20.945993 4871 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f1a59c5-9648-4b50-9131-83a436f5e6cf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:21 crc kubenswrapper[4871]: I0128 15:37:21.558873 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-69txw" event={"ID":"8f1a59c5-9648-4b50-9131-83a436f5e6cf","Type":"ContainerDied","Data":"de3065a1d1309770579012b63c52fb70e33ae62bf2bc03f0bb4477337aeddc14"} Jan 28 15:37:21 crc kubenswrapper[4871]: I0128 15:37:21.558937 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-69txw" Jan 28 15:37:21 crc kubenswrapper[4871]: I0128 15:37:21.558971 4871 scope.go:117] "RemoveContainer" containerID="3c65d00dca9b627262574163bd9a0117d3625ee242da609baacf96677a9147b6" Jan 28 15:37:21 crc kubenswrapper[4871]: I0128 15:37:21.617109 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-69txw"] Jan 28 15:37:21 crc kubenswrapper[4871]: I0128 15:37:21.626845 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-69txw"] Jan 28 15:37:22 crc kubenswrapper[4871]: I0128 15:37:22.926490 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f1a59c5-9648-4b50-9131-83a436f5e6cf" path="/var/lib/kubelet/pods/8f1a59c5-9648-4b50-9131-83a436f5e6cf/volumes" Jan 28 15:37:28 crc kubenswrapper[4871]: I0128 15:37:28.532026 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vp4mw" Jan 28 15:37:28 crc kubenswrapper[4871]: E0128 15:37:28.598681 4871 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" Jan 28 15:37:28 crc kubenswrapper[4871]: E0128 15:37:28.598900 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n4h64chbh7fh5bch6fh654h668h89hf7h586h68bh5c6h94h5d7h5cfh67bh675h68fhbfh649hdch657h586h599h544h97hf8h6hb7h675hdfq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwk5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN 
SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-c2xpq_openstack(8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:28 crc kubenswrapper[4871]: E0128 15:37:28.600107 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-c2xpq" podUID="8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da" Jan 28 15:37:28 crc kubenswrapper[4871]: I0128 15:37:28.649279 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vp4mw" Jan 28 15:37:28 crc kubenswrapper[4871]: I0128 15:37:28.649268 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vp4mw" event={"ID":"eb7bc661-06ed-4edf-afd4-0d8b48c8b738","Type":"ContainerDied","Data":"3c6658e9d4aa5abbeb5b1ba30c3c2a14d447b9384d6905db70caa4064d8a56ff"} Jan 28 15:37:28 crc kubenswrapper[4871]: E0128 15:37:28.649925 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-c2xpq" podUID="8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da" Jan 28 15:37:28 crc kubenswrapper[4871]: I0128 15:37:28.703551 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbtb6\" (UniqueName: \"kubernetes.io/projected/eb7bc661-06ed-4edf-afd4-0d8b48c8b738-kube-api-access-wbtb6\") pod \"eb7bc661-06ed-4edf-afd4-0d8b48c8b738\" (UID: \"eb7bc661-06ed-4edf-afd4-0d8b48c8b738\") " Jan 28 15:37:28 crc kubenswrapper[4871]: I0128 15:37:28.703795 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb7bc661-06ed-4edf-afd4-0d8b48c8b738-config\") pod \"eb7bc661-06ed-4edf-afd4-0d8b48c8b738\" (UID: \"eb7bc661-06ed-4edf-afd4-0d8b48c8b738\") " Jan 28 15:37:28 crc kubenswrapper[4871]: I0128 15:37:28.703854 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb7bc661-06ed-4edf-afd4-0d8b48c8b738-dns-svc\") pod \"eb7bc661-06ed-4edf-afd4-0d8b48c8b738\" (UID: \"eb7bc661-06ed-4edf-afd4-0d8b48c8b738\") " Jan 28 15:37:28 crc kubenswrapper[4871]: I0128 15:37:28.709648 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/eb7bc661-06ed-4edf-afd4-0d8b48c8b738-kube-api-access-wbtb6" (OuterVolumeSpecName: "kube-api-access-wbtb6") pod "eb7bc661-06ed-4edf-afd4-0d8b48c8b738" (UID: "eb7bc661-06ed-4edf-afd4-0d8b48c8b738"). InnerVolumeSpecName "kube-api-access-wbtb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:28 crc kubenswrapper[4871]: I0128 15:37:28.727337 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb7bc661-06ed-4edf-afd4-0d8b48c8b738-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb7bc661-06ed-4edf-afd4-0d8b48c8b738" (UID: "eb7bc661-06ed-4edf-afd4-0d8b48c8b738"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:28 crc kubenswrapper[4871]: I0128 15:37:28.727835 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb7bc661-06ed-4edf-afd4-0d8b48c8b738-config" (OuterVolumeSpecName: "config") pod "eb7bc661-06ed-4edf-afd4-0d8b48c8b738" (UID: "eb7bc661-06ed-4edf-afd4-0d8b48c8b738"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:28 crc kubenswrapper[4871]: I0128 15:37:28.805802 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb7bc661-06ed-4edf-afd4-0d8b48c8b738-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:28 crc kubenswrapper[4871]: I0128 15:37:28.805849 4871 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb7bc661-06ed-4edf-afd4-0d8b48c8b738-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:28 crc kubenswrapper[4871]: I0128 15:37:28.805865 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbtb6\" (UniqueName: \"kubernetes.io/projected/eb7bc661-06ed-4edf-afd4-0d8b48c8b738-kube-api-access-wbtb6\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:28 crc kubenswrapper[4871]: E0128 15:37:28.806227 4871 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Jan 28 15:37:28 crc kubenswrapper[4871]: E0128 15:37:28.806394 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n4h64chbh7fh5bch6fh654h668h89hf7h586h68bh5c6h94h5d7h5cfh67bh675h68fhbfh649hdch657h586h599h544h97hf8h6hb7h675hdfq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwzt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,}
,InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-2s4s6_openstack(10434904-135c-4ec2-a483-1647ce52500b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:28 crc kubenswrapper[4871]: E0128 15:37:28.807885 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-2s4s6" podUID="10434904-135c-4ec2-a483-1647ce52500b" Jan 28 15:37:28 crc kubenswrapper[4871]: E0128 15:37:28.826816 4871 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 28 15:37:28 crc kubenswrapper[4871]: E0128 15:37:28.827008 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vlvrv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:ni
l,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(634ee164-2990-4b2b-88e4-ce901728e251): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:28 crc kubenswrapper[4871]: E0128 15:37:28.828207 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="634ee164-2990-4b2b-88e4-ce901728e251" Jan 28 15:37:28 crc kubenswrapper[4871]: I0128 15:37:28.993182 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vp4mw"] Jan 28 15:37:29 crc kubenswrapper[4871]: I0128 15:37:29.000031 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vp4mw"] Jan 28 15:37:29 crc kubenswrapper[4871]: E0128 15:37:29.047196 4871 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified" Jan 28 15:37:29 crc kubenswrapper[4871]: E0128 15:37:29.047424 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-sb,Image:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbdh5bfh654h98h5f6h56fh685hbh687h5c8h597h5b4h564hbbh6ch5cbh65dhcfhbfh658h95h699h564hc7hcdh657h569h66h655h5f9h65hfdq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jx7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction
{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(3fc41ff5-8884-408d-94ca-512e6c34e2d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:29 crc kubenswrapper[4871]: E0128 15:37:29.213661 4871 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" Jan 28 15:37:29 crc 
kubenswrapper[4871]: E0128 15:37:29.213877 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fh96h9fh545h94h648h658h64bh596h5f4h5dch574h5b8h6fh569h86h689h54bh644h68dh55bh6ch5b7h68h5c9h54bh646h98h546h554hcch9bq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmjdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropa
gation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(8887227a-30f0-4a29-8018-2e18033b3b8f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:29 crc kubenswrapper[4871]: E0128 15:37:29.537657 4871 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Jan 28 15:37:29 crc kubenswrapper[4871]: E0128 15:37:29.537864 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n68ch694h76h5c4h5fbh686h669h54ch544h649h558hc8h4h67bh5b8hf9hd7hf8h8fh599h59h66fh6fh7h79hc4h7fh5bbh5cdh5dh54h56q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovs-rundir,ReadOnly:true,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:true,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5gnh,ReadOnly:tru
e,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-metrics-nl92r_openstack(81859795-4888-4eae-8589-5a5a4992d584): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:29 crc kubenswrapper[4871]: E0128 15:37:29.539060 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-metrics-nl92r" podUID="81859795-4888-4eae-8589-5a5a4992d584" Jan 28 15:37:29 crc kubenswrapper[4871]: I0128 15:37:29.608772 4871 scope.go:117] "RemoveContainer" containerID="44d2d4072d59a05c60e20a6b18cc595aa65b753416f6f3bdf19b9a2621c79267" Jan 28 15:37:29 crc kubenswrapper[4871]: E0128 15:37:29.657529 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-2s4s6" podUID="10434904-135c-4ec2-a483-1647ce52500b" Jan 28 15:37:29 crc kubenswrapper[4871]: E0128 15:37:29.657558 4871 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovn-controller-metrics-nl92r" podUID="81859795-4888-4eae-8589-5a5a4992d584" Jan 28 15:37:30 crc kubenswrapper[4871]: E0128 15:37:30.435191 4871 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 28 15:37:30 crc kubenswrapper[4871]: E0128 15:37:30.435552 4871 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 28 15:37:30 crc kubenswrapper[4871]: E0128 15:37:30.435745 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ltftr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(24325972-e640-4b7b-b5c9-215dd8cd0fea): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:37:30 crc kubenswrapper[4871]: E0128 15:37:30.437416 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="24325972-e640-4b7b-b5c9-215dd8cd0fea" Jan 28 15:37:30 crc kubenswrapper[4871]: I0128 15:37:30.803494 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" 
event={"ID":"3a3637bf-9cf3-47b6-ae0e-e77c0476e060","Type":"ContainerStarted","Data":"17c9c399373e0fdb84078263272743a958300787b1fa9a895c82bd72d598bfc1"} Jan 28 15:37:30 crc kubenswrapper[4871]: I0128 15:37:30.803618 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:30 crc kubenswrapper[4871]: I0128 15:37:30.807242 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" event={"ID":"118fc506-61f1-42e4-ac40-1442d20e4708","Type":"ContainerStarted","Data":"21de8cfc362e3b88da13e65095ecf518dfb528d9c247c27d90c8de12039e1504"} Jan 28 15:37:30 crc kubenswrapper[4871]: I0128 15:37:30.807525 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" Jan 28 15:37:30 crc kubenswrapper[4871]: E0128 15:37:30.808386 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="24325972-e640-4b7b-b5c9-215dd8cd0fea" Jan 28 15:37:30 crc kubenswrapper[4871]: I0128 15:37:30.829813 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" podStartSLOduration=15.291974025 podStartE2EDuration="16.829791152s" podCreationTimestamp="2026-01-28 15:37:14 +0000 UTC" firstStartedPulling="2026-01-28 15:37:15.034021405 +0000 UTC m=+1186.929859737" lastFinishedPulling="2026-01-28 15:37:16.571838542 +0000 UTC m=+1188.467676864" observedRunningTime="2026-01-28 15:37:30.817725471 +0000 UTC m=+1202.713563813" watchObservedRunningTime="2026-01-28 15:37:30.829791152 +0000 UTC m=+1202.725629474" Jan 28 15:37:30 crc kubenswrapper[4871]: I0128 15:37:30.859521 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" podStartSLOduration=15.251551074 podStartE2EDuration="16.859506297s" podCreationTimestamp="2026-01-28 15:37:14 +0000 UTC" firstStartedPulling="2026-01-28 15:37:14.898230301 +0000 UTC m=+1186.794068623" lastFinishedPulling="2026-01-28 15:37:16.506185504 +0000 UTC m=+1188.402023846" observedRunningTime="2026-01-28 15:37:30.853192899 +0000 UTC m=+1202.749031231" watchObservedRunningTime="2026-01-28 15:37:30.859506297 +0000 UTC m=+1202.755344619" Jan 28 15:37:30 crc kubenswrapper[4871]: E0128 15:37:30.899205 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="8887227a-30f0-4a29-8018-2e18033b3b8f" Jan 28 15:37:30 crc kubenswrapper[4871]: E0128 15:37:30.909466 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="3fc41ff5-8884-408d-94ca-512e6c34e2d3" Jan 28 15:37:30 crc kubenswrapper[4871]: I0128 15:37:30.916519 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb7bc661-06ed-4edf-afd4-0d8b48c8b738" path="/var/lib/kubelet/pods/eb7bc661-06ed-4edf-afd4-0d8b48c8b738/volumes" Jan 28 15:37:31 crc kubenswrapper[4871]: I0128 15:37:31.850632 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"03065e0e-cdb6-49a2-bfe3-28236f770fdc","Type":"ContainerStarted","Data":"c3c7a90313ade728c955134d1689106d85e809643705da42d29b003d191c1dd5"} Jan 28 15:37:31 crc kubenswrapper[4871]: I0128 15:37:31.850714 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 28 15:37:31 crc kubenswrapper[4871]: I0128 15:37:31.852432 4871 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"634ee164-2990-4b2b-88e4-ce901728e251","Type":"ContainerStarted","Data":"0d49f7a504331e99b4300616994326d80fdd9ceb567d7f8367a73055d1d2bd62"} Jan 28 15:37:31 crc kubenswrapper[4871]: I0128 15:37:31.854245 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3fc41ff5-8884-408d-94ca-512e6c34e2d3","Type":"ContainerStarted","Data":"a1997b1508eb818194f7644aef26aa48cd1d45a274511239be63b0b74c141b6e"} Jan 28 15:37:31 crc kubenswrapper[4871]: E0128 15:37:31.858294 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="3fc41ff5-8884-408d-94ca-512e6c34e2d3" Jan 28 15:37:31 crc kubenswrapper[4871]: I0128 15:37:31.859405 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8887227a-30f0-4a29-8018-2e18033b3b8f","Type":"ContainerStarted","Data":"43b62e19d9245f7c30996b0766d8078d400c43be21fa12b29038988d4680c229"} Jan 28 15:37:31 crc kubenswrapper[4871]: E0128 15:37:31.863396 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="8887227a-30f0-4a29-8018-2e18033b3b8f" Jan 28 15:37:31 crc kubenswrapper[4871]: I0128 15:37:31.866384 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"7241d8aa-248e-46a8-88af-365415f843f8","Type":"ContainerStarted","Data":"70bf65d42b4fad923aeead68d15b65f55fb402c452e48fdcf05564b881a19cce"} Jan 28 15:37:31 crc kubenswrapper[4871]: I0128 15:37:31.889731 4871 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=19.646223307 podStartE2EDuration="36.889709701s" podCreationTimestamp="2026-01-28 15:36:55 +0000 UTC" firstStartedPulling="2026-01-28 15:37:12.109527763 +0000 UTC m=+1184.005366085" lastFinishedPulling="2026-01-28 15:37:29.353014157 +0000 UTC m=+1201.248852479" observedRunningTime="2026-01-28 15:37:31.887831572 +0000 UTC m=+1203.783669894" watchObservedRunningTime="2026-01-28 15:37:31.889709701 +0000 UTC m=+1203.785548023" Jan 28 15:37:32 crc kubenswrapper[4871]: I0128 15:37:32.873129 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"48f16980-86d0-4648-9ebd-a428b5253832","Type":"ContainerStarted","Data":"997dac3993900059d2b662b7d41e85dc593e867353b7064b650a90e524340637"} Jan 28 15:37:32 crc kubenswrapper[4871]: I0128 15:37:32.875302 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2","Type":"ContainerStarted","Data":"c53190f3262b5642f7ee119e0f5ec89c6c9f1943ff1790f4cbd3711f966b2bf8"} Jan 28 15:37:32 crc kubenswrapper[4871]: E0128 15:37:32.878017 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="8887227a-30f0-4a29-8018-2e18033b3b8f" Jan 28 15:37:32 crc kubenswrapper[4871]: E0128 15:37:32.878037 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="3fc41ff5-8884-408d-94ca-512e6c34e2d3" Jan 28 15:37:34 crc kubenswrapper[4871]: I0128 
15:37:34.567466 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:34 crc kubenswrapper[4871]: I0128 15:37:34.723404 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-ngql8"] Jan 28 15:37:34 crc kubenswrapper[4871]: I0128 15:37:34.723653 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" podUID="118fc506-61f1-42e4-ac40-1442d20e4708" containerName="dnsmasq-dns" containerID="cri-o://21de8cfc362e3b88da13e65095ecf518dfb528d9c247c27d90c8de12039e1504" gracePeriod=10 Jan 28 15:37:34 crc kubenswrapper[4871]: I0128 15:37:34.890715 4871 generic.go:334] "Generic (PLEG): container finished" podID="118fc506-61f1-42e4-ac40-1442d20e4708" containerID="21de8cfc362e3b88da13e65095ecf518dfb528d9c247c27d90c8de12039e1504" exitCode=0 Jan 28 15:37:34 crc kubenswrapper[4871]: I0128 15:37:34.890799 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" event={"ID":"118fc506-61f1-42e4-ac40-1442d20e4708","Type":"ContainerDied","Data":"21de8cfc362e3b88da13e65095ecf518dfb528d9c247c27d90c8de12039e1504"} Jan 28 15:37:35 crc kubenswrapper[4871]: I0128 15:37:35.422222 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" Jan 28 15:37:35 crc kubenswrapper[4871]: I0128 15:37:35.540111 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/118fc506-61f1-42e4-ac40-1442d20e4708-ovsdbserver-nb\") pod \"118fc506-61f1-42e4-ac40-1442d20e4708\" (UID: \"118fc506-61f1-42e4-ac40-1442d20e4708\") " Jan 28 15:37:35 crc kubenswrapper[4871]: I0128 15:37:35.540474 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q26z\" (UniqueName: \"kubernetes.io/projected/118fc506-61f1-42e4-ac40-1442d20e4708-kube-api-access-4q26z\") pod \"118fc506-61f1-42e4-ac40-1442d20e4708\" (UID: \"118fc506-61f1-42e4-ac40-1442d20e4708\") " Jan 28 15:37:35 crc kubenswrapper[4871]: I0128 15:37:35.540520 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/118fc506-61f1-42e4-ac40-1442d20e4708-config\") pod \"118fc506-61f1-42e4-ac40-1442d20e4708\" (UID: \"118fc506-61f1-42e4-ac40-1442d20e4708\") " Jan 28 15:37:35 crc kubenswrapper[4871]: I0128 15:37:35.540546 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/118fc506-61f1-42e4-ac40-1442d20e4708-dns-svc\") pod \"118fc506-61f1-42e4-ac40-1442d20e4708\" (UID: \"118fc506-61f1-42e4-ac40-1442d20e4708\") " Jan 28 15:37:35 crc kubenswrapper[4871]: I0128 15:37:35.555881 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/118fc506-61f1-42e4-ac40-1442d20e4708-kube-api-access-4q26z" (OuterVolumeSpecName: "kube-api-access-4q26z") pod "118fc506-61f1-42e4-ac40-1442d20e4708" (UID: "118fc506-61f1-42e4-ac40-1442d20e4708"). InnerVolumeSpecName "kube-api-access-4q26z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:35 crc kubenswrapper[4871]: I0128 15:37:35.575687 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/118fc506-61f1-42e4-ac40-1442d20e4708-config" (OuterVolumeSpecName: "config") pod "118fc506-61f1-42e4-ac40-1442d20e4708" (UID: "118fc506-61f1-42e4-ac40-1442d20e4708"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:35 crc kubenswrapper[4871]: I0128 15:37:35.581878 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/118fc506-61f1-42e4-ac40-1442d20e4708-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "118fc506-61f1-42e4-ac40-1442d20e4708" (UID: "118fc506-61f1-42e4-ac40-1442d20e4708"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:35 crc kubenswrapper[4871]: I0128 15:37:35.589252 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/118fc506-61f1-42e4-ac40-1442d20e4708-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "118fc506-61f1-42e4-ac40-1442d20e4708" (UID: "118fc506-61f1-42e4-ac40-1442d20e4708"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:35 crc kubenswrapper[4871]: I0128 15:37:35.642291 4871 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/118fc506-61f1-42e4-ac40-1442d20e4708-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:35 crc kubenswrapper[4871]: I0128 15:37:35.642331 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q26z\" (UniqueName: \"kubernetes.io/projected/118fc506-61f1-42e4-ac40-1442d20e4708-kube-api-access-4q26z\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:35 crc kubenswrapper[4871]: I0128 15:37:35.642348 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/118fc506-61f1-42e4-ac40-1442d20e4708-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:35 crc kubenswrapper[4871]: I0128 15:37:35.642359 4871 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/118fc506-61f1-42e4-ac40-1442d20e4708-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:35 crc kubenswrapper[4871]: I0128 15:37:35.907492 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" event={"ID":"118fc506-61f1-42e4-ac40-1442d20e4708","Type":"ContainerDied","Data":"49d6a2111a6c7d0c822a7991daddc86544d016299ab288f30a11d847ed659b16"} Jan 28 15:37:35 crc kubenswrapper[4871]: I0128 15:37:35.907535 4871 scope.go:117] "RemoveContainer" containerID="21de8cfc362e3b88da13e65095ecf518dfb528d9c247c27d90c8de12039e1504" Jan 28 15:37:35 crc kubenswrapper[4871]: I0128 15:37:35.907534 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-ngql8" Jan 28 15:37:35 crc kubenswrapper[4871]: I0128 15:37:35.928903 4871 scope.go:117] "RemoveContainer" containerID="1767e6abe2cd8134c3d6b059161c0fcd3e4913921a434914ec37dd90cbc9a045" Jan 28 15:37:35 crc kubenswrapper[4871]: I0128 15:37:35.946770 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-ngql8"] Jan 28 15:37:35 crc kubenswrapper[4871]: I0128 15:37:35.953936 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-ngql8"] Jan 28 15:37:36 crc kubenswrapper[4871]: I0128 15:37:36.142792 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 28 15:37:36 crc kubenswrapper[4871]: I0128 15:37:36.916815 4871 generic.go:334] "Generic (PLEG): container finished" podID="634ee164-2990-4b2b-88e4-ce901728e251" containerID="0d49f7a504331e99b4300616994326d80fdd9ceb567d7f8367a73055d1d2bd62" exitCode=0 Jan 28 15:37:36 crc kubenswrapper[4871]: I0128 15:37:36.918995 4871 generic.go:334] "Generic (PLEG): container finished" podID="7241d8aa-248e-46a8-88af-365415f843f8" containerID="70bf65d42b4fad923aeead68d15b65f55fb402c452e48fdcf05564b881a19cce" exitCode=0 Jan 28 15:37:36 crc kubenswrapper[4871]: I0128 15:37:36.919966 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="118fc506-61f1-42e4-ac40-1442d20e4708" path="/var/lib/kubelet/pods/118fc506-61f1-42e4-ac40-1442d20e4708/volumes" Jan 28 15:37:36 crc kubenswrapper[4871]: I0128 15:37:36.920506 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"634ee164-2990-4b2b-88e4-ce901728e251","Type":"ContainerDied","Data":"0d49f7a504331e99b4300616994326d80fdd9ceb567d7f8367a73055d1d2bd62"} Jan 28 15:37:36 crc kubenswrapper[4871]: I0128 15:37:36.920537 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"7241d8aa-248e-46a8-88af-365415f843f8","Type":"ContainerDied","Data":"70bf65d42b4fad923aeead68d15b65f55fb402c452e48fdcf05564b881a19cce"} Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.626729 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-ncpsd"] Jan 28 15:37:37 crc kubenswrapper[4871]: E0128 15:37:37.627107 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118fc506-61f1-42e4-ac40-1442d20e4708" containerName="init" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.627131 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="118fc506-61f1-42e4-ac40-1442d20e4708" containerName="init" Jan 28 15:37:37 crc kubenswrapper[4871]: E0128 15:37:37.627144 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118fc506-61f1-42e4-ac40-1442d20e4708" containerName="dnsmasq-dns" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.627155 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="118fc506-61f1-42e4-ac40-1442d20e4708" containerName="dnsmasq-dns" Jan 28 15:37:37 crc kubenswrapper[4871]: E0128 15:37:37.627168 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1a59c5-9648-4b50-9131-83a436f5e6cf" containerName="init" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.627176 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1a59c5-9648-4b50-9131-83a436f5e6cf" containerName="init" Jan 28 15:37:37 crc kubenswrapper[4871]: E0128 15:37:37.627192 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb7bc661-06ed-4edf-afd4-0d8b48c8b738" containerName="init" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.627200 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb7bc661-06ed-4edf-afd4-0d8b48c8b738" containerName="init" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.627400 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb7bc661-06ed-4edf-afd4-0d8b48c8b738" 
containerName="init" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.627433 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f1a59c5-9648-4b50-9131-83a436f5e6cf" containerName="init" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.627470 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="118fc506-61f1-42e4-ac40-1442d20e4708" containerName="dnsmasq-dns" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.628341 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.650050 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-ncpsd"] Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.676094 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-dns-svc\") pod \"dnsmasq-dns-698758b865-ncpsd\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.676144 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-config\") pod \"dnsmasq-dns-698758b865-ncpsd\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.676251 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-ncpsd\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:37:37 crc kubenswrapper[4871]: 
I0128 15:37:37.676269 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvhx5\" (UniqueName: \"kubernetes.io/projected/25df56c1-4cb1-4018-9477-8f7b2b4f1410-kube-api-access-mvhx5\") pod \"dnsmasq-dns-698758b865-ncpsd\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.676310 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-ncpsd\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.777380 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-ncpsd\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.777425 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvhx5\" (UniqueName: \"kubernetes.io/projected/25df56c1-4cb1-4018-9477-8f7b2b4f1410-kube-api-access-mvhx5\") pod \"dnsmasq-dns-698758b865-ncpsd\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.777457 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-ncpsd\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:37:37 crc 
kubenswrapper[4871]: I0128 15:37:37.777507 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-dns-svc\") pod \"dnsmasq-dns-698758b865-ncpsd\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.777555 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-config\") pod \"dnsmasq-dns-698758b865-ncpsd\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.778383 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-ncpsd\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.778384 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-ncpsd\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.778456 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-dns-svc\") pod \"dnsmasq-dns-698758b865-ncpsd\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.778710 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-config\") pod \"dnsmasq-dns-698758b865-ncpsd\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.803736 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvhx5\" (UniqueName: \"kubernetes.io/projected/25df56c1-4cb1-4018-9477-8f7b2b4f1410-kube-api-access-mvhx5\") pod \"dnsmasq-dns-698758b865-ncpsd\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.930447 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"634ee164-2990-4b2b-88e4-ce901728e251","Type":"ContainerStarted","Data":"7f45c36eb34d0cb1db5116acbdb4ba214ffc637cbd0eba40b8f7dd8f6963537d"} Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.932775 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"7241d8aa-248e-46a8-88af-365415f843f8","Type":"ContainerStarted","Data":"dfb52425569f5d1a69025e9e0f46d8b82a04c5f56593b386e6740e0a9ff68811"} Jan 28 15:37:37 crc kubenswrapper[4871]: I0128 15:37:37.944188 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:37:38 crc kubenswrapper[4871]: I0128 15:37:38.003478 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371992.851313 podStartE2EDuration="44.003462543s" podCreationTimestamp="2026-01-28 15:36:54 +0000 UTC" firstStartedPulling="2026-01-28 15:37:12.044199966 +0000 UTC m=+1183.940038288" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:37:37.980845432 +0000 UTC m=+1209.876683754" watchObservedRunningTime="2026-01-28 15:37:38.003462543 +0000 UTC m=+1209.899300855" Jan 28 15:37:38 crc kubenswrapper[4871]: I0128 15:37:38.034860 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=27.360739209 podStartE2EDuration="45.034839971s" podCreationTimestamp="2026-01-28 15:36:53 +0000 UTC" firstStartedPulling="2026-01-28 15:37:12.072784416 +0000 UTC m=+1183.968622738" lastFinishedPulling="2026-01-28 15:37:29.746885178 +0000 UTC m=+1201.642723500" observedRunningTime="2026-01-28 15:37:38.034664406 +0000 UTC m=+1209.930502738" watchObservedRunningTime="2026-01-28 15:37:38.034839971 +0000 UTC m=+1209.930678303" Jan 28 15:37:38 crc kubenswrapper[4871]: I0128 15:37:38.495964 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-ncpsd"] Jan 28 15:37:38 crc kubenswrapper[4871]: I0128 15:37:38.804660 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 28 15:37:38 crc kubenswrapper[4871]: I0128 15:37:38.809500 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 28 15:37:38 crc kubenswrapper[4871]: I0128 15:37:38.811033 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 28 15:37:38 crc kubenswrapper[4871]: I0128 15:37:38.811422 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-pnbd8" Jan 28 15:37:38 crc kubenswrapper[4871]: I0128 15:37:38.812109 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 28 15:37:38 crc kubenswrapper[4871]: I0128 15:37:38.813208 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 28 15:37:38 crc kubenswrapper[4871]: I0128 15:37:38.831012 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 15:37:38 crc kubenswrapper[4871]: I0128 15:37:38.899740 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:38 crc kubenswrapper[4871]: I0128 15:37:38.899803 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e17493-c4b5-417e-b5b2-42a1a245447e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:38 crc kubenswrapper[4871]: I0128 15:37:38.899829 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-etc-swift\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 
15:37:38 crc kubenswrapper[4871]: I0128 15:37:38.899863 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a6e17493-c4b5-417e-b5b2-42a1a245447e-cache\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:38 crc kubenswrapper[4871]: I0128 15:37:38.899882 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a6e17493-c4b5-417e-b5b2-42a1a245447e-lock\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:38 crc kubenswrapper[4871]: I0128 15:37:38.899908 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx4x5\" (UniqueName: \"kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-kube-api-access-fx4x5\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:38 crc kubenswrapper[4871]: I0128 15:37:38.940056 4871 generic.go:334] "Generic (PLEG): container finished" podID="25df56c1-4cb1-4018-9477-8f7b2b4f1410" containerID="a168f71adb7b17620d61493b4713d0933e0fe77a619ded51a954b0d27fc1be10" exitCode=0 Jan 28 15:37:38 crc kubenswrapper[4871]: I0128 15:37:38.940110 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-ncpsd" event={"ID":"25df56c1-4cb1-4018-9477-8f7b2b4f1410","Type":"ContainerDied","Data":"a168f71adb7b17620d61493b4713d0933e0fe77a619ded51a954b0d27fc1be10"} Jan 28 15:37:38 crc kubenswrapper[4871]: I0128 15:37:38.940161 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-ncpsd" event={"ID":"25df56c1-4cb1-4018-9477-8f7b2b4f1410","Type":"ContainerStarted","Data":"beddbd2d387230c0f15938fac3be563632a51d32c305d6de1c5983bc46785c8a"} 
Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.001433 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a6e17493-c4b5-417e-b5b2-42a1a245447e-cache\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.001481 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a6e17493-c4b5-417e-b5b2-42a1a245447e-lock\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.001536 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx4x5\" (UniqueName: \"kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-kube-api-access-fx4x5\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.001662 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.001726 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e17493-c4b5-417e-b5b2-42a1a245447e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.001764 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-etc-swift\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.002034 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a6e17493-c4b5-417e-b5b2-42a1a245447e-cache\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.002508 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a6e17493-c4b5-417e-b5b2-42a1a245447e-lock\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.003146 4871 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0" Jan 28 15:37:39 crc kubenswrapper[4871]: E0128 15:37:39.003653 4871 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 15:37:39 crc kubenswrapper[4871]: E0128 15:37:39.003680 4871 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 15:37:39 crc kubenswrapper[4871]: E0128 15:37:39.003724 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-etc-swift podName:a6e17493-c4b5-417e-b5b2-42a1a245447e nodeName:}" failed. No retries permitted until 2026-01-28 15:37:39.503710025 +0000 UTC m=+1211.399548347 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-etc-swift") pod "swift-storage-0" (UID: "a6e17493-c4b5-417e-b5b2-42a1a245447e") : configmap "swift-ring-files" not found Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.006512 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e17493-c4b5-417e-b5b2-42a1a245447e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.023513 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx4x5\" (UniqueName: \"kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-kube-api-access-fx4x5\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.024967 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.462800 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-4jm4z"] Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.465434 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.467847 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.468491 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.471267 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.490532 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4jm4z"] Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.508931 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-etc-swift\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.508983 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-combined-ca-bundle\") pod \"swift-ring-rebalance-4jm4z\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.509046 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-etc-swift\") pod \"swift-ring-rebalance-4jm4z\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.509105 4871 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-ring-data-devices\") pod \"swift-ring-rebalance-4jm4z\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.509128 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-dispersionconf\") pod \"swift-ring-rebalance-4jm4z\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: E0128 15:37:39.509149 4871 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.509167 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpff7\" (UniqueName: \"kubernetes.io/projected/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-kube-api-access-lpff7\") pod \"swift-ring-rebalance-4jm4z\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.509191 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-swiftconf\") pod \"swift-ring-rebalance-4jm4z\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.509213 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-scripts\") pod \"swift-ring-rebalance-4jm4z\" 
(UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: E0128 15:37:39.509171 4871 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 15:37:39 crc kubenswrapper[4871]: E0128 15:37:39.509340 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-etc-swift podName:a6e17493-c4b5-417e-b5b2-42a1a245447e nodeName:}" failed. No retries permitted until 2026-01-28 15:37:40.509321944 +0000 UTC m=+1212.405160266 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-etc-swift") pod "swift-storage-0" (UID: "a6e17493-c4b5-417e-b5b2-42a1a245447e") : configmap "swift-ring-files" not found Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.610263 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-swiftconf\") pod \"swift-ring-rebalance-4jm4z\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.610310 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-scripts\") pod \"swift-ring-rebalance-4jm4z\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.610397 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-combined-ca-bundle\") pod \"swift-ring-rebalance-4jm4z\" (UID: 
\"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.610455 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-etc-swift\") pod \"swift-ring-rebalance-4jm4z\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.610486 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-ring-data-devices\") pod \"swift-ring-rebalance-4jm4z\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.610507 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-dispersionconf\") pod \"swift-ring-rebalance-4jm4z\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.610544 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpff7\" (UniqueName: \"kubernetes.io/projected/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-kube-api-access-lpff7\") pod \"swift-ring-rebalance-4jm4z\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.611907 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-etc-swift\") pod \"swift-ring-rebalance-4jm4z\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " 
pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.612193 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-ring-data-devices\") pod \"swift-ring-rebalance-4jm4z\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.612648 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-scripts\") pod \"swift-ring-rebalance-4jm4z\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.614699 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-combined-ca-bundle\") pod \"swift-ring-rebalance-4jm4z\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.615734 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-dispersionconf\") pod \"swift-ring-rebalance-4jm4z\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.623220 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-swiftconf\") pod \"swift-ring-rebalance-4jm4z\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.629256 4871 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpff7\" (UniqueName: \"kubernetes.io/projected/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-kube-api-access-lpff7\") pod \"swift-ring-rebalance-4jm4z\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.782384 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.949953 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-ncpsd" event={"ID":"25df56c1-4cb1-4018-9477-8f7b2b4f1410","Type":"ContainerStarted","Data":"a907faa39ffd998edd2434303c38467a92356e4b6ccf376f4017bb2cd6a22426"} Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.951251 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:37:39 crc kubenswrapper[4871]: I0128 15:37:39.973951 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-ncpsd" podStartSLOduration=2.973930651 podStartE2EDuration="2.973930651s" podCreationTimestamp="2026-01-28 15:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:37:39.970763961 +0000 UTC m=+1211.866602283" watchObservedRunningTime="2026-01-28 15:37:39.973930651 +0000 UTC m=+1211.869768973" Jan 28 15:37:40 crc kubenswrapper[4871]: I0128 15:37:40.042127 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4jm4z"] Jan 28 15:37:40 crc kubenswrapper[4871]: I0128 15:37:40.526797 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-etc-swift\") pod \"swift-storage-0\" 
(UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:40 crc kubenswrapper[4871]: E0128 15:37:40.526983 4871 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 15:37:40 crc kubenswrapper[4871]: E0128 15:37:40.527008 4871 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 15:37:40 crc kubenswrapper[4871]: E0128 15:37:40.527061 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-etc-swift podName:a6e17493-c4b5-417e-b5b2-42a1a245447e nodeName:}" failed. No retries permitted until 2026-01-28 15:37:42.527044795 +0000 UTC m=+1214.422883117 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-etc-swift") pod "swift-storage-0" (UID: "a6e17493-c4b5-417e-b5b2-42a1a245447e") : configmap "swift-ring-files" not found Jan 28 15:37:40 crc kubenswrapper[4871]: I0128 15:37:40.963119 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4jm4z" event={"ID":"b5a578f5-c09e-40cd-b9b6-36b7b1f61370","Type":"ContainerStarted","Data":"1272d111416687a400ed50954de483d9902c77ed558665b22f8484ebb96c3462"} Jan 28 15:37:41 crc kubenswrapper[4871]: I0128 15:37:41.971242 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2s4s6" event={"ID":"10434904-135c-4ec2-a483-1647ce52500b","Type":"ContainerStarted","Data":"b428c01d49b0cd55e47466cbfe3a08c7b466d7d790803bc0e4fca86ea00bdafd"} Jan 28 15:37:41 crc kubenswrapper[4871]: I0128 15:37:41.973799 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-2s4s6" Jan 28 15:37:41 crc kubenswrapper[4871]: I0128 15:37:41.976008 4871 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovn-controller-ovs-c2xpq" event={"ID":"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da","Type":"ContainerStarted","Data":"daeb504b583470f319a6fe7b7e470b825803335b477c16a851ab47722631defa"} Jan 28 15:37:41 crc kubenswrapper[4871]: I0128 15:37:41.995160 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-2s4s6" podStartSLOduration=12.166606504 podStartE2EDuration="40.995137316s" podCreationTimestamp="2026-01-28 15:37:01 +0000 UTC" firstStartedPulling="2026-01-28 15:37:12.481818224 +0000 UTC m=+1184.377656536" lastFinishedPulling="2026-01-28 15:37:41.310349026 +0000 UTC m=+1213.206187348" observedRunningTime="2026-01-28 15:37:41.991708517 +0000 UTC m=+1213.887546839" watchObservedRunningTime="2026-01-28 15:37:41.995137316 +0000 UTC m=+1213.890975638" Jan 28 15:37:42 crc kubenswrapper[4871]: I0128 15:37:42.624201 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-etc-swift\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:42 crc kubenswrapper[4871]: E0128 15:37:42.624865 4871 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 15:37:42 crc kubenswrapper[4871]: E0128 15:37:42.624984 4871 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 15:37:42 crc kubenswrapper[4871]: E0128 15:37:42.625084 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-etc-swift podName:a6e17493-c4b5-417e-b5b2-42a1a245447e nodeName:}" failed. No retries permitted until 2026-01-28 15:37:46.625069318 +0000 UTC m=+1218.520907640 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-etc-swift") pod "swift-storage-0" (UID: "a6e17493-c4b5-417e-b5b2-42a1a245447e") : configmap "swift-ring-files" not found Jan 28 15:37:42 crc kubenswrapper[4871]: I0128 15:37:42.989577 4871 generic.go:334] "Generic (PLEG): container finished" podID="8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da" containerID="daeb504b583470f319a6fe7b7e470b825803335b477c16a851ab47722631defa" exitCode=0 Jan 28 15:37:42 crc kubenswrapper[4871]: I0128 15:37:42.989742 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c2xpq" event={"ID":"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da","Type":"ContainerDied","Data":"daeb504b583470f319a6fe7b7e470b825803335b477c16a851ab47722631defa"} Jan 28 15:37:43 crc kubenswrapper[4871]: I0128 15:37:43.815890 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:37:43 crc kubenswrapper[4871]: I0128 15:37:43.815941 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:37:44 crc kubenswrapper[4871]: I0128 15:37:44.510990 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 28 15:37:44 crc kubenswrapper[4871]: I0128 15:37:44.511643 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 28 15:37:44 crc kubenswrapper[4871]: I0128 15:37:44.615926 4871 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 28 15:37:45 crc kubenswrapper[4871]: I0128 15:37:45.018681 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c2xpq" event={"ID":"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da","Type":"ContainerStarted","Data":"068a45eb7e9d33344dcbc73f32bd92fda91c6f543d96dae2d3a631aff227f363"} Jan 28 15:37:45 crc kubenswrapper[4871]: I0128 15:37:45.117617 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 28 15:37:45 crc kubenswrapper[4871]: I0128 15:37:45.748723 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 28 15:37:45 crc kubenswrapper[4871]: I0128 15:37:45.749145 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 28 15:37:45 crc kubenswrapper[4871]: I0128 15:37:45.824442 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c48d-account-create-update-6plbm"] Jan 28 15:37:45 crc kubenswrapper[4871]: I0128 15:37:45.825416 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c48d-account-create-update-6plbm" Jan 28 15:37:45 crc kubenswrapper[4871]: I0128 15:37:45.830277 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 28 15:37:45 crc kubenswrapper[4871]: I0128 15:37:45.848404 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-dlmwk"] Jan 28 15:37:45 crc kubenswrapper[4871]: I0128 15:37:45.851951 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-dlmwk" Jan 28 15:37:45 crc kubenswrapper[4871]: I0128 15:37:45.877630 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dlmwk"] Jan 28 15:37:45 crc kubenswrapper[4871]: I0128 15:37:45.898880 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 28 15:37:45 crc kubenswrapper[4871]: I0128 15:37:45.911178 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c48d-account-create-update-6plbm"] Jan 28 15:37:45 crc kubenswrapper[4871]: I0128 15:37:45.948472 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl8tv\" (UniqueName: \"kubernetes.io/projected/2dc6911f-8892-4a6a-99c8-d9cee0eac352-kube-api-access-kl8tv\") pod \"keystone-db-create-dlmwk\" (UID: \"2dc6911f-8892-4a6a-99c8-d9cee0eac352\") " pod="openstack/keystone-db-create-dlmwk" Jan 28 15:37:45 crc kubenswrapper[4871]: I0128 15:37:45.948702 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pltlc\" (UniqueName: \"kubernetes.io/projected/cf5794fa-0b86-4d4a-a9ba-500a4834e315-kube-api-access-pltlc\") pod \"keystone-c48d-account-create-update-6plbm\" (UID: \"cf5794fa-0b86-4d4a-a9ba-500a4834e315\") " pod="openstack/keystone-c48d-account-create-update-6plbm" Jan 28 15:37:45 crc kubenswrapper[4871]: I0128 15:37:45.948763 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc6911f-8892-4a6a-99c8-d9cee0eac352-operator-scripts\") pod \"keystone-db-create-dlmwk\" (UID: \"2dc6911f-8892-4a6a-99c8-d9cee0eac352\") " pod="openstack/keystone-db-create-dlmwk" Jan 28 15:37:45 crc kubenswrapper[4871]: I0128 15:37:45.948795 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf5794fa-0b86-4d4a-a9ba-500a4834e315-operator-scripts\") pod \"keystone-c48d-account-create-update-6plbm\" (UID: \"cf5794fa-0b86-4d4a-a9ba-500a4834e315\") " pod="openstack/keystone-c48d-account-create-update-6plbm" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.027651 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-nl92r" event={"ID":"81859795-4888-4eae-8589-5a5a4992d584","Type":"ContainerStarted","Data":"f0e6ad06b6d5001c36ed59ea81e286e6cfbf802870d831f1c2781a47e5858940"} Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.029071 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4jm4z" event={"ID":"b5a578f5-c09e-40cd-b9b6-36b7b1f61370","Type":"ContainerStarted","Data":"100853051cefc922a33ccaa7920e973ecc01eddc46339cf81ba93b62359e560e"} Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.032018 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3fc41ff5-8884-408d-94ca-512e6c34e2d3","Type":"ContainerStarted","Data":"6b92cb02496506a5a0d945abf468ac266470c9462ac495c865e471ab68ead372"} Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.036892 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c2xpq" event={"ID":"8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da","Type":"ContainerStarted","Data":"3644a084f82c254dc7ea9dfdb97db8d4f24814d640594aa1c51225353ea90419"} Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.037046 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.037280 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.040730 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-nb-0" event={"ID":"8887227a-30f0-4a29-8018-2e18033b3b8f","Type":"ContainerStarted","Data":"dfc6a81e50f9bf3d4109991f5fe635c68c2373f15ca942571474ca2f79e647f9"} Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.053924 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl8tv\" (UniqueName: \"kubernetes.io/projected/2dc6911f-8892-4a6a-99c8-d9cee0eac352-kube-api-access-kl8tv\") pod \"keystone-db-create-dlmwk\" (UID: \"2dc6911f-8892-4a6a-99c8-d9cee0eac352\") " pod="openstack/keystone-db-create-dlmwk" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.054628 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pltlc\" (UniqueName: \"kubernetes.io/projected/cf5794fa-0b86-4d4a-a9ba-500a4834e315-kube-api-access-pltlc\") pod \"keystone-c48d-account-create-update-6plbm\" (UID: \"cf5794fa-0b86-4d4a-a9ba-500a4834e315\") " pod="openstack/keystone-c48d-account-create-update-6plbm" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.054784 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc6911f-8892-4a6a-99c8-d9cee0eac352-operator-scripts\") pod \"keystone-db-create-dlmwk\" (UID: \"2dc6911f-8892-4a6a-99c8-d9cee0eac352\") " pod="openstack/keystone-db-create-dlmwk" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.054991 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf5794fa-0b86-4d4a-a9ba-500a4834e315-operator-scripts\") pod \"keystone-c48d-account-create-update-6plbm\" (UID: \"cf5794fa-0b86-4d4a-a9ba-500a4834e315\") " pod="openstack/keystone-c48d-account-create-update-6plbm" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.060731 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2dc6911f-8892-4a6a-99c8-d9cee0eac352-operator-scripts\") pod \"keystone-db-create-dlmwk\" (UID: \"2dc6911f-8892-4a6a-99c8-d9cee0eac352\") " pod="openstack/keystone-db-create-dlmwk" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.062462 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-nl92r" podStartSLOduration=-9223372003.792393 podStartE2EDuration="33.062382767s" podCreationTimestamp="2026-01-28 15:37:13 +0000 UTC" firstStartedPulling="2026-01-28 15:37:14.681998603 +0000 UTC m=+1186.577836925" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:37:46.053055602 +0000 UTC m=+1217.948893924" watchObservedRunningTime="2026-01-28 15:37:46.062382767 +0000 UTC m=+1217.958221089" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.064363 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf5794fa-0b86-4d4a-a9ba-500a4834e315-operator-scripts\") pod \"keystone-c48d-account-create-update-6plbm\" (UID: \"cf5794fa-0b86-4d4a-a9ba-500a4834e315\") " pod="openstack/keystone-c48d-account-create-update-6plbm" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.100251 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl8tv\" (UniqueName: \"kubernetes.io/projected/2dc6911f-8892-4a6a-99c8-d9cee0eac352-kube-api-access-kl8tv\") pod \"keystone-db-create-dlmwk\" (UID: \"2dc6911f-8892-4a6a-99c8-d9cee0eac352\") " pod="openstack/keystone-db-create-dlmwk" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.105275 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pltlc\" (UniqueName: \"kubernetes.io/projected/cf5794fa-0b86-4d4a-a9ba-500a4834e315-kube-api-access-pltlc\") pod \"keystone-c48d-account-create-update-6plbm\" (UID: \"cf5794fa-0b86-4d4a-a9ba-500a4834e315\") " 
pod="openstack/keystone-c48d-account-create-update-6plbm" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.141499 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-4jm4z" podStartSLOduration=2.444085707 podStartE2EDuration="7.141471697s" podCreationTimestamp="2026-01-28 15:37:39 +0000 UTC" firstStartedPulling="2026-01-28 15:37:40.053460794 +0000 UTC m=+1211.949299116" lastFinishedPulling="2026-01-28 15:37:44.750846764 +0000 UTC m=+1216.646685106" observedRunningTime="2026-01-28 15:37:46.100509876 +0000 UTC m=+1217.996348218" watchObservedRunningTime="2026-01-28 15:37:46.141471697 +0000 UTC m=+1218.037310019" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.154214 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c48d-account-create-update-6plbm" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.159938 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-c2xpq" podStartSLOduration=16.470254875 podStartE2EDuration="45.159917587s" podCreationTimestamp="2026-01-28 15:37:01 +0000 UTC" firstStartedPulling="2026-01-28 15:37:12.698231056 +0000 UTC m=+1184.594069368" lastFinishedPulling="2026-01-28 15:37:41.387893758 +0000 UTC m=+1213.283732080" observedRunningTime="2026-01-28 15:37:46.127662282 +0000 UTC m=+1218.023500604" watchObservedRunningTime="2026-01-28 15:37:46.159917587 +0000 UTC m=+1218.055755909" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.177216 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-pcq2c"] Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.178163 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-pcq2c" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.180467 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=13.016451318 podStartE2EDuration="45.180450464s" podCreationTimestamp="2026-01-28 15:37:01 +0000 UTC" firstStartedPulling="2026-01-28 15:37:12.586169088 +0000 UTC m=+1184.482007410" lastFinishedPulling="2026-01-28 15:37:44.750168224 +0000 UTC m=+1216.646006556" observedRunningTime="2026-01-28 15:37:46.177577623 +0000 UTC m=+1218.073415945" watchObservedRunningTime="2026-01-28 15:37:46.180450464 +0000 UTC m=+1218.076288806" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.182406 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dlmwk" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.249861 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pcq2c"] Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.278371 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7d19-account-create-update-fvmz8"] Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.280802 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7d19-account-create-update-fvmz8" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.286752 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.308895 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=11.112479893 podStartE2EDuration="43.308854096s" podCreationTimestamp="2026-01-28 15:37:03 +0000 UTC" firstStartedPulling="2026-01-28 15:37:13.406020891 +0000 UTC m=+1185.301859223" lastFinishedPulling="2026-01-28 15:37:45.602395104 +0000 UTC m=+1217.498233426" observedRunningTime="2026-01-28 15:37:46.214805455 +0000 UTC m=+1218.110643817" watchObservedRunningTime="2026-01-28 15:37:46.308854096 +0000 UTC m=+1218.204692438" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.740129 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.764133 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d19-account-create-update-fvmz8"] Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.790004 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-etc-swift\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.790091 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr4t9\" (UniqueName: \"kubernetes.io/projected/0c6a3a96-8a57-4097-a921-58250c387ddc-kube-api-access-lr4t9\") pod \"placement-db-create-pcq2c\" (UID: \"0c6a3a96-8a57-4097-a921-58250c387ddc\") " pod="openstack/placement-db-create-pcq2c" Jan 28 
15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.790133 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6a3a96-8a57-4097-a921-58250c387ddc-operator-scripts\") pod \"placement-db-create-pcq2c\" (UID: \"0c6a3a96-8a57-4097-a921-58250c387ddc\") " pod="openstack/placement-db-create-pcq2c" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.790156 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f464j\" (UniqueName: \"kubernetes.io/projected/b566ceb5-8640-40f3-bbf1-c0e11f82602c-kube-api-access-f464j\") pod \"placement-7d19-account-create-update-fvmz8\" (UID: \"b566ceb5-8640-40f3-bbf1-c0e11f82602c\") " pod="openstack/placement-7d19-account-create-update-fvmz8" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.790222 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b566ceb5-8640-40f3-bbf1-c0e11f82602c-operator-scripts\") pod \"placement-7d19-account-create-update-fvmz8\" (UID: \"b566ceb5-8640-40f3-bbf1-c0e11f82602c\") " pod="openstack/placement-7d19-account-create-update-fvmz8" Jan 28 15:37:46 crc kubenswrapper[4871]: E0128 15:37:46.790397 4871 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 15:37:46 crc kubenswrapper[4871]: E0128 15:37:46.790411 4871 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 15:37:46 crc kubenswrapper[4871]: E0128 15:37:46.790447 4871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-etc-swift podName:a6e17493-c4b5-417e-b5b2-42a1a245447e nodeName:}" failed. 
No retries permitted until 2026-01-28 15:37:54.790432678 +0000 UTC m=+1226.686271000 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-etc-swift") pod "swift-storage-0" (UID: "a6e17493-c4b5-417e-b5b2-42a1a245447e") : configmap "swift-ring-files" not found Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.791059 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1112-account-create-update-q2w6s"] Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.792119 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1112-account-create-update-q2w6s" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.799504 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.822157 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-4s628"] Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.825607 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-4s628" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.834153 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4s628"] Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.839917 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1112-account-create-update-q2w6s"] Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.897473 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr4t9\" (UniqueName: \"kubernetes.io/projected/0c6a3a96-8a57-4097-a921-58250c387ddc-kube-api-access-lr4t9\") pod \"placement-db-create-pcq2c\" (UID: \"0c6a3a96-8a57-4097-a921-58250c387ddc\") " pod="openstack/placement-db-create-pcq2c" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.897521 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6a3a96-8a57-4097-a921-58250c387ddc-operator-scripts\") pod \"placement-db-create-pcq2c\" (UID: \"0c6a3a96-8a57-4097-a921-58250c387ddc\") " pod="openstack/placement-db-create-pcq2c" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.897549 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f464j\" (UniqueName: \"kubernetes.io/projected/b566ceb5-8640-40f3-bbf1-c0e11f82602c-kube-api-access-f464j\") pod \"placement-7d19-account-create-update-fvmz8\" (UID: \"b566ceb5-8640-40f3-bbf1-c0e11f82602c\") " pod="openstack/placement-7d19-account-create-update-fvmz8" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.897602 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b566ceb5-8640-40f3-bbf1-c0e11f82602c-operator-scripts\") pod \"placement-7d19-account-create-update-fvmz8\" (UID: \"b566ceb5-8640-40f3-bbf1-c0e11f82602c\") " 
pod="openstack/placement-7d19-account-create-update-fvmz8" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.898341 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b566ceb5-8640-40f3-bbf1-c0e11f82602c-operator-scripts\") pod \"placement-7d19-account-create-update-fvmz8\" (UID: \"b566ceb5-8640-40f3-bbf1-c0e11f82602c\") " pod="openstack/placement-7d19-account-create-update-fvmz8" Jan 28 15:37:46 crc kubenswrapper[4871]: I0128 15:37:46.899716 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6a3a96-8a57-4097-a921-58250c387ddc-operator-scripts\") pod \"placement-db-create-pcq2c\" (UID: \"0c6a3a96-8a57-4097-a921-58250c387ddc\") " pod="openstack/placement-db-create-pcq2c" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:46.970551 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f464j\" (UniqueName: \"kubernetes.io/projected/b566ceb5-8640-40f3-bbf1-c0e11f82602c-kube-api-access-f464j\") pod \"placement-7d19-account-create-update-fvmz8\" (UID: \"b566ceb5-8640-40f3-bbf1-c0e11f82602c\") " pod="openstack/placement-7d19-account-create-update-fvmz8" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:46.971489 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr4t9\" (UniqueName: \"kubernetes.io/projected/0c6a3a96-8a57-4097-a921-58250c387ddc-kube-api-access-lr4t9\") pod \"placement-db-create-pcq2c\" (UID: \"0c6a3a96-8a57-4097-a921-58250c387ddc\") " pod="openstack/placement-db-create-pcq2c" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.003546 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62e4563a-ee68-409e-b5c4-f4c53657c71d-operator-scripts\") pod \"glance-1112-account-create-update-q2w6s\" (UID: 
\"62e4563a-ee68-409e-b5c4-f4c53657c71d\") " pod="openstack/glance-1112-account-create-update-q2w6s" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.003734 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cwqn\" (UniqueName: \"kubernetes.io/projected/62e4563a-ee68-409e-b5c4-f4c53657c71d-kube-api-access-5cwqn\") pod \"glance-1112-account-create-update-q2w6s\" (UID: \"62e4563a-ee68-409e-b5c4-f4c53657c71d\") " pod="openstack/glance-1112-account-create-update-q2w6s" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.003876 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9-operator-scripts\") pod \"glance-db-create-4s628\" (UID: \"7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9\") " pod="openstack/glance-db-create-4s628" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.003924 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhg7l\" (UniqueName: \"kubernetes.io/projected/7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9-kube-api-access-nhg7l\") pod \"glance-db-create-4s628\" (UID: \"7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9\") " pod="openstack/glance-db-create-4s628" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.014432 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.104108 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-pcq2c" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.107875 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhg7l\" (UniqueName: \"kubernetes.io/projected/7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9-kube-api-access-nhg7l\") pod \"glance-db-create-4s628\" (UID: \"7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9\") " pod="openstack/glance-db-create-4s628" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.108115 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62e4563a-ee68-409e-b5c4-f4c53657c71d-operator-scripts\") pod \"glance-1112-account-create-update-q2w6s\" (UID: \"62e4563a-ee68-409e-b5c4-f4c53657c71d\") " pod="openstack/glance-1112-account-create-update-q2w6s" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.108280 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cwqn\" (UniqueName: \"kubernetes.io/projected/62e4563a-ee68-409e-b5c4-f4c53657c71d-kube-api-access-5cwqn\") pod \"glance-1112-account-create-update-q2w6s\" (UID: \"62e4563a-ee68-409e-b5c4-f4c53657c71d\") " pod="openstack/glance-1112-account-create-update-q2w6s" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.108539 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9-operator-scripts\") pod \"glance-db-create-4s628\" (UID: \"7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9\") " pod="openstack/glance-db-create-4s628" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.110346 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9-operator-scripts\") pod \"glance-db-create-4s628\" (UID: 
\"7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9\") " pod="openstack/glance-db-create-4s628" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.110366 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62e4563a-ee68-409e-b5c4-f4c53657c71d-operator-scripts\") pod \"glance-1112-account-create-update-q2w6s\" (UID: \"62e4563a-ee68-409e-b5c4-f4c53657c71d\") " pod="openstack/glance-1112-account-create-update-q2w6s" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.229825 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d19-account-create-update-fvmz8" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.329429 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhg7l\" (UniqueName: \"kubernetes.io/projected/7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9-kube-api-access-nhg7l\") pod \"glance-db-create-4s628\" (UID: \"7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9\") " pod="openstack/glance-db-create-4s628" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.329711 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cwqn\" (UniqueName: \"kubernetes.io/projected/62e4563a-ee68-409e-b5c4-f4c53657c71d-kube-api-access-5cwqn\") pod \"glance-1112-account-create-update-q2w6s\" (UID: \"62e4563a-ee68-409e-b5c4-f4c53657c71d\") " pod="openstack/glance-1112-account-create-update-q2w6s" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.514956 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1112-account-create-update-q2w6s" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.549297 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-4s628" Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.826171 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dlmwk"] Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.846892 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c48d-account-create-update-6plbm"] Jan 28 15:37:47 crc kubenswrapper[4871]: W0128 15:37:47.856644 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dc6911f_8892_4a6a_99c8_d9cee0eac352.slice/crio-549501cd91dc92852805228dbeb8bfe65051273fa7248d676e9e5a28183028e5 WatchSource:0}: Error finding container 549501cd91dc92852805228dbeb8bfe65051273fa7248d676e9e5a28183028e5: Status 404 returned error can't find the container with id 549501cd91dc92852805228dbeb8bfe65051273fa7248d676e9e5a28183028e5 Jan 28 15:37:47 crc kubenswrapper[4871]: W0128 15:37:47.871534 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf5794fa_0b86_4d4a_a9ba_500a4834e315.slice/crio-9cae662ac4e4d026ddf80d358cbec17e367497e525159bc0d0d25437a86b0529 WatchSource:0}: Error finding container 9cae662ac4e4d026ddf80d358cbec17e367497e525159bc0d0d25437a86b0529: Status 404 returned error can't find the container with id 9cae662ac4e4d026ddf80d358cbec17e367497e525159bc0d0d25437a86b0529 Jan 28 15:37:47 crc kubenswrapper[4871]: I0128 15:37:47.946823 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.005870 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-98wd7"] Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.007646 4871 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" podUID="3a3637bf-9cf3-47b6-ae0e-e77c0476e060" containerName="dnsmasq-dns" containerID="cri-o://17c9c399373e0fdb84078263272743a958300787b1fa9a895c82bd72d598bfc1" gracePeriod=10 Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.067140 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c48d-account-create-update-6plbm" event={"ID":"cf5794fa-0b86-4d4a-a9ba-500a4834e315","Type":"ContainerStarted","Data":"9cae662ac4e4d026ddf80d358cbec17e367497e525159bc0d0d25437a86b0529"} Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.072076 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"24325972-e640-4b7b-b5c9-215dd8cd0fea","Type":"ContainerStarted","Data":"ce4a36198f656cf90712cab3145e0d0c6191bd8df035767bd412072f0c72c80a"} Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.073249 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.075729 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dlmwk" event={"ID":"2dc6911f-8892-4a6a-99c8-d9cee0eac352","Type":"ContainerStarted","Data":"549501cd91dc92852805228dbeb8bfe65051273fa7248d676e9e5a28183028e5"} Jan 28 15:37:48 crc kubenswrapper[4871]: W0128 15:37:48.090562 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c6a3a96_8a57_4097_a921_58250c387ddc.slice/crio-a4988158b0cdac4dcf7bbb96d44ffb42f6d4fe4c8c9a688d329ba76c6a880e04 WatchSource:0}: Error finding container a4988158b0cdac4dcf7bbb96d44ffb42f6d4fe4c8c9a688d329ba76c6a880e04: Status 404 returned error can't find the container with id a4988158b0cdac4dcf7bbb96d44ffb42f6d4fe4c8c9a688d329ba76c6a880e04 Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.099008 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/placement-db-create-pcq2c"] Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.101701 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=16.606362788 podStartE2EDuration="51.10134379s" podCreationTimestamp="2026-01-28 15:36:57 +0000 UTC" firstStartedPulling="2026-01-28 15:37:12.510896749 +0000 UTC m=+1184.406735071" lastFinishedPulling="2026-01-28 15:37:47.005877751 +0000 UTC m=+1218.901716073" observedRunningTime="2026-01-28 15:37:48.085929144 +0000 UTC m=+1219.981767466" watchObservedRunningTime="2026-01-28 15:37:48.10134379 +0000 UTC m=+1219.997182112" Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.250243 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d19-account-create-update-fvmz8"] Jan 28 15:37:48 crc kubenswrapper[4871]: W0128 15:37:48.311010 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb566ceb5_8640_40f3_bbf1_c0e11f82602c.slice/crio-1d044ca375179bae897bc2a5d8bd2b7c1dc1aca24353b01350e53d903f9bb1fa WatchSource:0}: Error finding container 1d044ca375179bae897bc2a5d8bd2b7c1dc1aca24353b01350e53d903f9bb1fa: Status 404 returned error can't find the container with id 1d044ca375179bae897bc2a5d8bd2b7c1dc1aca24353b01350e53d903f9bb1fa Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.332307 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1112-account-create-update-q2w6s"] Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.346307 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.346344 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.422921 4871 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/glance-db-create-4s628"] Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.430351 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:48 crc kubenswrapper[4871]: W0128 15:37:48.466766 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f6fbeb5_1fc1_4012_9e72_8e74f72f7cd9.slice/crio-db9faba252119193a8df7aa6deb3db7c85ce73011e0c03da31cc06dac5fa3bde WatchSource:0}: Error finding container db9faba252119193a8df7aa6deb3db7c85ce73011e0c03da31cc06dac5fa3bde: Status 404 returned error can't find the container with id db9faba252119193a8df7aa6deb3db7c85ce73011e0c03da31cc06dac5fa3bde Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.593332 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.748924 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-config\") pod \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") " Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.749416 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tlzd\" (UniqueName: \"kubernetes.io/projected/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-kube-api-access-8tlzd\") pod \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") " Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.749472 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-dns-svc\") pod \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") 
" Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.749517 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-ovsdbserver-sb\") pod \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") " Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.749542 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-ovsdbserver-nb\") pod \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\" (UID: \"3a3637bf-9cf3-47b6-ae0e-e77c0476e060\") " Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.763883 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-kube-api-access-8tlzd" (OuterVolumeSpecName: "kube-api-access-8tlzd") pod "3a3637bf-9cf3-47b6-ae0e-e77c0476e060" (UID: "3a3637bf-9cf3-47b6-ae0e-e77c0476e060"). InnerVolumeSpecName "kube-api-access-8tlzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.800773 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-config" (OuterVolumeSpecName: "config") pod "3a3637bf-9cf3-47b6-ae0e-e77c0476e060" (UID: "3a3637bf-9cf3-47b6-ae0e-e77c0476e060"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.821827 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3a3637bf-9cf3-47b6-ae0e-e77c0476e060" (UID: "3a3637bf-9cf3-47b6-ae0e-e77c0476e060"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.821947 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3a3637bf-9cf3-47b6-ae0e-e77c0476e060" (UID: "3a3637bf-9cf3-47b6-ae0e-e77c0476e060"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.824249 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3a3637bf-9cf3-47b6-ae0e-e77c0476e060" (UID: "3a3637bf-9cf3-47b6-ae0e-e77c0476e060"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.852000 4871 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.853033 4871 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.853120 4871 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:48 crc kubenswrapper[4871]: I0128 15:37:48.853177 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:48 crc 
kubenswrapper[4871]: I0128 15:37:48.853242 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tlzd\" (UniqueName: \"kubernetes.io/projected/3a3637bf-9cf3-47b6-ae0e-e77c0476e060-kube-api-access-8tlzd\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.085291 4871 generic.go:334] "Generic (PLEG): container finished" podID="62e4563a-ee68-409e-b5c4-f4c53657c71d" containerID="f53e6eb592f33c9aa514fc180121e16c2542e27dccdd5dd9bd717bc40aedb24a" exitCode=0 Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.085364 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1112-account-create-update-q2w6s" event={"ID":"62e4563a-ee68-409e-b5c4-f4c53657c71d","Type":"ContainerDied","Data":"f53e6eb592f33c9aa514fc180121e16c2542e27dccdd5dd9bd717bc40aedb24a"} Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.085434 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1112-account-create-update-q2w6s" event={"ID":"62e4563a-ee68-409e-b5c4-f4c53657c71d","Type":"ContainerStarted","Data":"fcde81d35be8cc927a09055cf0a6eb64b4d6f2577514ebb96c3f33a230284ea1"} Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.087046 4871 generic.go:334] "Generic (PLEG): container finished" podID="0c6a3a96-8a57-4097-a921-58250c387ddc" containerID="5bb6ef37566f2fddaa54c3ebf69088dc134fea5a7a9af37d0b670c9bbd5dcaa6" exitCode=0 Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.087111 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pcq2c" event={"ID":"0c6a3a96-8a57-4097-a921-58250c387ddc","Type":"ContainerDied","Data":"5bb6ef37566f2fddaa54c3ebf69088dc134fea5a7a9af37d0b670c9bbd5dcaa6"} Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.087139 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pcq2c" 
event={"ID":"0c6a3a96-8a57-4097-a921-58250c387ddc","Type":"ContainerStarted","Data":"a4988158b0cdac4dcf7bbb96d44ffb42f6d4fe4c8c9a688d329ba76c6a880e04"} Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.088976 4871 generic.go:334] "Generic (PLEG): container finished" podID="b566ceb5-8640-40f3-bbf1-c0e11f82602c" containerID="b0b6e4fd6be3235029f7796b4bb5c5192c1bfb3b8464abd694389ba1e8565b5e" exitCode=0 Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.089030 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d19-account-create-update-fvmz8" event={"ID":"b566ceb5-8640-40f3-bbf1-c0e11f82602c","Type":"ContainerDied","Data":"b0b6e4fd6be3235029f7796b4bb5c5192c1bfb3b8464abd694389ba1e8565b5e"} Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.089049 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d19-account-create-update-fvmz8" event={"ID":"b566ceb5-8640-40f3-bbf1-c0e11f82602c","Type":"ContainerStarted","Data":"1d044ca375179bae897bc2a5d8bd2b7c1dc1aca24353b01350e53d903f9bb1fa"} Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.090470 4871 generic.go:334] "Generic (PLEG): container finished" podID="cf5794fa-0b86-4d4a-a9ba-500a4834e315" containerID="576e70132aa7e3928671d95b5af42a5f7d24b87b5f6f7945b4f7815073df39fc" exitCode=0 Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.090521 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c48d-account-create-update-6plbm" event={"ID":"cf5794fa-0b86-4d4a-a9ba-500a4834e315","Type":"ContainerDied","Data":"576e70132aa7e3928671d95b5af42a5f7d24b87b5f6f7945b4f7815073df39fc"} Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.092273 4871 generic.go:334] "Generic (PLEG): container finished" podID="3a3637bf-9cf3-47b6-ae0e-e77c0476e060" containerID="17c9c399373e0fdb84078263272743a958300787b1fa9a895c82bd72d598bfc1" exitCode=0 Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.092336 4871 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" event={"ID":"3a3637bf-9cf3-47b6-ae0e-e77c0476e060","Type":"ContainerDied","Data":"17c9c399373e0fdb84078263272743a958300787b1fa9a895c82bd72d598bfc1"} Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.092423 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" event={"ID":"3a3637bf-9cf3-47b6-ae0e-e77c0476e060","Type":"ContainerDied","Data":"c5a183e29d0025f7833cf915276e966620f20d39357620e5086fa719b4c6d925"} Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.092450 4871 scope.go:117] "RemoveContainer" containerID="17c9c399373e0fdb84078263272743a958300787b1fa9a895c82bd72d598bfc1" Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.092643 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-98wd7" Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.103225 4871 generic.go:334] "Generic (PLEG): container finished" podID="7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9" containerID="c80e3cb9a24e4376e616c5771654172e0d2f5c132360823fe57fbf0958bcfd1f" exitCode=0 Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.103305 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4s628" event={"ID":"7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9","Type":"ContainerDied","Data":"c80e3cb9a24e4376e616c5771654172e0d2f5c132360823fe57fbf0958bcfd1f"} Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.103337 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4s628" event={"ID":"7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9","Type":"ContainerStarted","Data":"db9faba252119193a8df7aa6deb3db7c85ce73011e0c03da31cc06dac5fa3bde"} Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.108215 4871 generic.go:334] "Generic (PLEG): container finished" podID="2dc6911f-8892-4a6a-99c8-d9cee0eac352" 
containerID="cefd8b9f3cf39496dab49884be2940c9c5d05195abb50d8ebe50261d75cc6376" exitCode=0 Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.109053 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dlmwk" event={"ID":"2dc6911f-8892-4a6a-99c8-d9cee0eac352","Type":"ContainerDied","Data":"cefd8b9f3cf39496dab49884be2940c9c5d05195abb50d8ebe50261d75cc6376"} Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.126503 4871 scope.go:117] "RemoveContainer" containerID="689be453253dfc8ecb69ee356f4acdcee93625122871952a90d3e930df05319e" Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.153700 4871 scope.go:117] "RemoveContainer" containerID="17c9c399373e0fdb84078263272743a958300787b1fa9a895c82bd72d598bfc1" Jan 28 15:37:49 crc kubenswrapper[4871]: E0128 15:37:49.154387 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17c9c399373e0fdb84078263272743a958300787b1fa9a895c82bd72d598bfc1\": container with ID starting with 17c9c399373e0fdb84078263272743a958300787b1fa9a895c82bd72d598bfc1 not found: ID does not exist" containerID="17c9c399373e0fdb84078263272743a958300787b1fa9a895c82bd72d598bfc1" Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.154424 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17c9c399373e0fdb84078263272743a958300787b1fa9a895c82bd72d598bfc1"} err="failed to get container status \"17c9c399373e0fdb84078263272743a958300787b1fa9a895c82bd72d598bfc1\": rpc error: code = NotFound desc = could not find container \"17c9c399373e0fdb84078263272743a958300787b1fa9a895c82bd72d598bfc1\": container with ID starting with 17c9c399373e0fdb84078263272743a958300787b1fa9a895c82bd72d598bfc1 not found: ID does not exist" Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.154447 4871 scope.go:117] "RemoveContainer" containerID="689be453253dfc8ecb69ee356f4acdcee93625122871952a90d3e930df05319e" Jan 28 
15:37:49 crc kubenswrapper[4871]: E0128 15:37:49.154970 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"689be453253dfc8ecb69ee356f4acdcee93625122871952a90d3e930df05319e\": container with ID starting with 689be453253dfc8ecb69ee356f4acdcee93625122871952a90d3e930df05319e not found: ID does not exist" containerID="689be453253dfc8ecb69ee356f4acdcee93625122871952a90d3e930df05319e" Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.154988 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"689be453253dfc8ecb69ee356f4acdcee93625122871952a90d3e930df05319e"} err="failed to get container status \"689be453253dfc8ecb69ee356f4acdcee93625122871952a90d3e930df05319e\": rpc error: code = NotFound desc = could not find container \"689be453253dfc8ecb69ee356f4acdcee93625122871952a90d3e930df05319e\": container with ID starting with 689be453253dfc8ecb69ee356f4acdcee93625122871952a90d3e930df05319e not found: ID does not exist" Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.199297 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-98wd7"] Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.209269 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-98wd7"] Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.929293 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:49 crc kubenswrapper[4871]: I0128 15:37:49.970267 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:50 crc kubenswrapper[4871]: I0128 15:37:50.257651 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:50 crc kubenswrapper[4871]: I0128 15:37:50.929552 4871 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="3a3637bf-9cf3-47b6-ae0e-e77c0476e060" path="/var/lib/kubelet/pods/3a3637bf-9cf3-47b6-ae0e-e77c0476e060/volumes" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.258236 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4s628" event={"ID":"7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9","Type":"ContainerDied","Data":"db9faba252119193a8df7aa6deb3db7c85ce73011e0c03da31cc06dac5fa3bde"} Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.258284 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db9faba252119193a8df7aa6deb3db7c85ce73011e0c03da31cc06dac5fa3bde" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.295819 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4s628" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.446060 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9-operator-scripts\") pod \"7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9\" (UID: \"7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9\") " Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.446359 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhg7l\" (UniqueName: \"kubernetes.io/projected/7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9-kube-api-access-nhg7l\") pod \"7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9\" (UID: \"7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9\") " Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.449334 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9" (UID: "7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.456923 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9-kube-api-access-nhg7l" (OuterVolumeSpecName: "kube-api-access-nhg7l") pod "7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9" (UID: "7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9"). InnerVolumeSpecName "kube-api-access-nhg7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.513574 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c48d-account-create-update-6plbm" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.517777 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d19-account-create-update-fvmz8" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.523325 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dlmwk" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.535395 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pcq2c" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.548461 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1112-account-create-update-q2w6s" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.551121 4871 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.551180 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhg7l\" (UniqueName: \"kubernetes.io/projected/7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9-kube-api-access-nhg7l\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.652062 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b566ceb5-8640-40f3-bbf1-c0e11f82602c-operator-scripts\") pod \"b566ceb5-8640-40f3-bbf1-c0e11f82602c\" (UID: \"b566ceb5-8640-40f3-bbf1-c0e11f82602c\") " Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.652347 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl8tv\" (UniqueName: \"kubernetes.io/projected/2dc6911f-8892-4a6a-99c8-d9cee0eac352-kube-api-access-kl8tv\") pod \"2dc6911f-8892-4a6a-99c8-d9cee0eac352\" (UID: \"2dc6911f-8892-4a6a-99c8-d9cee0eac352\") " Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.652377 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pltlc\" (UniqueName: \"kubernetes.io/projected/cf5794fa-0b86-4d4a-a9ba-500a4834e315-kube-api-access-pltlc\") pod \"cf5794fa-0b86-4d4a-a9ba-500a4834e315\" (UID: \"cf5794fa-0b86-4d4a-a9ba-500a4834e315\") " Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.652508 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf5794fa-0b86-4d4a-a9ba-500a4834e315-operator-scripts\") pod 
\"cf5794fa-0b86-4d4a-a9ba-500a4834e315\" (UID: \"cf5794fa-0b86-4d4a-a9ba-500a4834e315\") " Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.652552 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62e4563a-ee68-409e-b5c4-f4c53657c71d-operator-scripts\") pod \"62e4563a-ee68-409e-b5c4-f4c53657c71d\" (UID: \"62e4563a-ee68-409e-b5c4-f4c53657c71d\") " Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.652597 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6a3a96-8a57-4097-a921-58250c387ddc-operator-scripts\") pod \"0c6a3a96-8a57-4097-a921-58250c387ddc\" (UID: \"0c6a3a96-8a57-4097-a921-58250c387ddc\") " Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.652621 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr4t9\" (UniqueName: \"kubernetes.io/projected/0c6a3a96-8a57-4097-a921-58250c387ddc-kube-api-access-lr4t9\") pod \"0c6a3a96-8a57-4097-a921-58250c387ddc\" (UID: \"0c6a3a96-8a57-4097-a921-58250c387ddc\") " Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.652654 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f464j\" (UniqueName: \"kubernetes.io/projected/b566ceb5-8640-40f3-bbf1-c0e11f82602c-kube-api-access-f464j\") pod \"b566ceb5-8640-40f3-bbf1-c0e11f82602c\" (UID: \"b566ceb5-8640-40f3-bbf1-c0e11f82602c\") " Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.652683 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cwqn\" (UniqueName: \"kubernetes.io/projected/62e4563a-ee68-409e-b5c4-f4c53657c71d-kube-api-access-5cwqn\") pod \"62e4563a-ee68-409e-b5c4-f4c53657c71d\" (UID: \"62e4563a-ee68-409e-b5c4-f4c53657c71d\") " Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.652705 4871 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc6911f-8892-4a6a-99c8-d9cee0eac352-operator-scripts\") pod \"2dc6911f-8892-4a6a-99c8-d9cee0eac352\" (UID: \"2dc6911f-8892-4a6a-99c8-d9cee0eac352\") " Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.653056 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf5794fa-0b86-4d4a-a9ba-500a4834e315-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf5794fa-0b86-4d4a-a9ba-500a4834e315" (UID: "cf5794fa-0b86-4d4a-a9ba-500a4834e315"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.653655 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dc6911f-8892-4a6a-99c8-d9cee0eac352-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2dc6911f-8892-4a6a-99c8-d9cee0eac352" (UID: "2dc6911f-8892-4a6a-99c8-d9cee0eac352"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.653703 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b566ceb5-8640-40f3-bbf1-c0e11f82602c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b566ceb5-8640-40f3-bbf1-c0e11f82602c" (UID: "b566ceb5-8640-40f3-bbf1-c0e11f82602c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.653723 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62e4563a-ee68-409e-b5c4-f4c53657c71d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62e4563a-ee68-409e-b5c4-f4c53657c71d" (UID: "62e4563a-ee68-409e-b5c4-f4c53657c71d"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.654675 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c6a3a96-8a57-4097-a921-58250c387ddc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c6a3a96-8a57-4097-a921-58250c387ddc" (UID: "0c6a3a96-8a57-4097-a921-58250c387ddc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.656743 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b566ceb5-8640-40f3-bbf1-c0e11f82602c-kube-api-access-f464j" (OuterVolumeSpecName: "kube-api-access-f464j") pod "b566ceb5-8640-40f3-bbf1-c0e11f82602c" (UID: "b566ceb5-8640-40f3-bbf1-c0e11f82602c"). InnerVolumeSpecName "kube-api-access-f464j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.657158 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dc6911f-8892-4a6a-99c8-d9cee0eac352-kube-api-access-kl8tv" (OuterVolumeSpecName: "kube-api-access-kl8tv") pod "2dc6911f-8892-4a6a-99c8-d9cee0eac352" (UID: "2dc6911f-8892-4a6a-99c8-d9cee0eac352"). InnerVolumeSpecName "kube-api-access-kl8tv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.657193 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf5794fa-0b86-4d4a-a9ba-500a4834e315-kube-api-access-pltlc" (OuterVolumeSpecName: "kube-api-access-pltlc") pod "cf5794fa-0b86-4d4a-a9ba-500a4834e315" (UID: "cf5794fa-0b86-4d4a-a9ba-500a4834e315"). InnerVolumeSpecName "kube-api-access-pltlc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.658250 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62e4563a-ee68-409e-b5c4-f4c53657c71d-kube-api-access-5cwqn" (OuterVolumeSpecName: "kube-api-access-5cwqn") pod "62e4563a-ee68-409e-b5c4-f4c53657c71d" (UID: "62e4563a-ee68-409e-b5c4-f4c53657c71d"). InnerVolumeSpecName "kube-api-access-5cwqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.658529 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c6a3a96-8a57-4097-a921-58250c387ddc-kube-api-access-lr4t9" (OuterVolumeSpecName: "kube-api-access-lr4t9") pod "0c6a3a96-8a57-4097-a921-58250c387ddc" (UID: "0c6a3a96-8a57-4097-a921-58250c387ddc"). InnerVolumeSpecName "kube-api-access-lr4t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.757216 4871 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf5794fa-0b86-4d4a-a9ba-500a4834e315-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.757262 4871 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62e4563a-ee68-409e-b5c4-f4c53657c71d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.757275 4871 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6a3a96-8a57-4097-a921-58250c387ddc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.757287 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr4t9\" (UniqueName: 
\"kubernetes.io/projected/0c6a3a96-8a57-4097-a921-58250c387ddc-kube-api-access-lr4t9\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.757301 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f464j\" (UniqueName: \"kubernetes.io/projected/b566ceb5-8640-40f3-bbf1-c0e11f82602c-kube-api-access-f464j\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.757312 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cwqn\" (UniqueName: \"kubernetes.io/projected/62e4563a-ee68-409e-b5c4-f4c53657c71d-kube-api-access-5cwqn\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.757325 4871 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc6911f-8892-4a6a-99c8-d9cee0eac352-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.757336 4871 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b566ceb5-8640-40f3-bbf1-c0e11f82602c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.757346 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl8tv\" (UniqueName: \"kubernetes.io/projected/2dc6911f-8892-4a6a-99c8-d9cee0eac352-kube-api-access-kl8tv\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:51 crc kubenswrapper[4871]: I0128 15:37:51.757354 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pltlc\" (UniqueName: \"kubernetes.io/projected/cf5794fa-0b86-4d4a-a9ba-500a4834e315-kube-api-access-pltlc\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:52 crc kubenswrapper[4871]: I0128 15:37:52.266168 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dlmwk" 
event={"ID":"2dc6911f-8892-4a6a-99c8-d9cee0eac352","Type":"ContainerDied","Data":"549501cd91dc92852805228dbeb8bfe65051273fa7248d676e9e5a28183028e5"} Jan 28 15:37:52 crc kubenswrapper[4871]: I0128 15:37:52.266206 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="549501cd91dc92852805228dbeb8bfe65051273fa7248d676e9e5a28183028e5" Jan 28 15:37:52 crc kubenswrapper[4871]: I0128 15:37:52.266260 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dlmwk" Jan 28 15:37:52 crc kubenswrapper[4871]: I0128 15:37:52.276985 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1112-account-create-update-q2w6s" event={"ID":"62e4563a-ee68-409e-b5c4-f4c53657c71d","Type":"ContainerDied","Data":"fcde81d35be8cc927a09055cf0a6eb64b4d6f2577514ebb96c3f33a230284ea1"} Jan 28 15:37:52 crc kubenswrapper[4871]: I0128 15:37:52.277020 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcde81d35be8cc927a09055cf0a6eb64b4d6f2577514ebb96c3f33a230284ea1" Jan 28 15:37:52 crc kubenswrapper[4871]: I0128 15:37:52.277067 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1112-account-create-update-q2w6s" Jan 28 15:37:52 crc kubenswrapper[4871]: I0128 15:37:52.279800 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pcq2c" event={"ID":"0c6a3a96-8a57-4097-a921-58250c387ddc","Type":"ContainerDied","Data":"a4988158b0cdac4dcf7bbb96d44ffb42f6d4fe4c8c9a688d329ba76c6a880e04"} Jan 28 15:37:52 crc kubenswrapper[4871]: I0128 15:37:52.279837 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4988158b0cdac4dcf7bbb96d44ffb42f6d4fe4c8c9a688d329ba76c6a880e04" Jan 28 15:37:52 crc kubenswrapper[4871]: I0128 15:37:52.279908 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-pcq2c" Jan 28 15:37:52 crc kubenswrapper[4871]: I0128 15:37:52.282387 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d19-account-create-update-fvmz8" event={"ID":"b566ceb5-8640-40f3-bbf1-c0e11f82602c","Type":"ContainerDied","Data":"1d044ca375179bae897bc2a5d8bd2b7c1dc1aca24353b01350e53d903f9bb1fa"} Jan 28 15:37:52 crc kubenswrapper[4871]: I0128 15:37:52.282427 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d044ca375179bae897bc2a5d8bd2b7c1dc1aca24353b01350e53d903f9bb1fa" Jan 28 15:37:52 crc kubenswrapper[4871]: I0128 15:37:52.282568 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d19-account-create-update-fvmz8" Jan 28 15:37:52 crc kubenswrapper[4871]: I0128 15:37:52.293317 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c48d-account-create-update-6plbm" Jan 28 15:37:52 crc kubenswrapper[4871]: I0128 15:37:52.293491 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c48d-account-create-update-6plbm" event={"ID":"cf5794fa-0b86-4d4a-a9ba-500a4834e315","Type":"ContainerDied","Data":"9cae662ac4e4d026ddf80d358cbec17e367497e525159bc0d0d25437a86b0529"} Jan 28 15:37:52 crc kubenswrapper[4871]: I0128 15:37:52.293530 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cae662ac4e4d026ddf80d358cbec17e367497e525159bc0d0d25437a86b0529" Jan 28 15:37:52 crc kubenswrapper[4871]: I0128 15:37:52.293531 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-4s628" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.152080 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pbkn6"] Jan 28 15:37:53 crc kubenswrapper[4871]: E0128 15:37:53.152408 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62e4563a-ee68-409e-b5c4-f4c53657c71d" containerName="mariadb-account-create-update" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.152421 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="62e4563a-ee68-409e-b5c4-f4c53657c71d" containerName="mariadb-account-create-update" Jan 28 15:37:53 crc kubenswrapper[4871]: E0128 15:37:53.152451 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3637bf-9cf3-47b6-ae0e-e77c0476e060" containerName="init" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.152457 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3637bf-9cf3-47b6-ae0e-e77c0476e060" containerName="init" Jan 28 15:37:53 crc kubenswrapper[4871]: E0128 15:37:53.152472 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6a3a96-8a57-4097-a921-58250c387ddc" containerName="mariadb-database-create" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.152478 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6a3a96-8a57-4097-a921-58250c387ddc" containerName="mariadb-database-create" Jan 28 15:37:53 crc kubenswrapper[4871]: E0128 15:37:53.152486 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3637bf-9cf3-47b6-ae0e-e77c0476e060" containerName="dnsmasq-dns" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.152492 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3637bf-9cf3-47b6-ae0e-e77c0476e060" containerName="dnsmasq-dns" Jan 28 15:37:53 crc kubenswrapper[4871]: E0128 15:37:53.152511 4871 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2dc6911f-8892-4a6a-99c8-d9cee0eac352" containerName="mariadb-database-create" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.152519 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc6911f-8892-4a6a-99c8-d9cee0eac352" containerName="mariadb-database-create" Jan 28 15:37:53 crc kubenswrapper[4871]: E0128 15:37:53.152529 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b566ceb5-8640-40f3-bbf1-c0e11f82602c" containerName="mariadb-account-create-update" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.152535 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="b566ceb5-8640-40f3-bbf1-c0e11f82602c" containerName="mariadb-account-create-update" Jan 28 15:37:53 crc kubenswrapper[4871]: E0128 15:37:53.152549 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9" containerName="mariadb-database-create" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.152556 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9" containerName="mariadb-database-create" Jan 28 15:37:53 crc kubenswrapper[4871]: E0128 15:37:53.152567 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf5794fa-0b86-4d4a-a9ba-500a4834e315" containerName="mariadb-account-create-update" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.152574 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf5794fa-0b86-4d4a-a9ba-500a4834e315" containerName="mariadb-account-create-update" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.152786 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6a3a96-8a57-4097-a921-58250c387ddc" containerName="mariadb-database-create" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.152801 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="62e4563a-ee68-409e-b5c4-f4c53657c71d" containerName="mariadb-account-create-update" Jan 28 15:37:53 crc 
kubenswrapper[4871]: I0128 15:37:53.152815 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a3637bf-9cf3-47b6-ae0e-e77c0476e060" containerName="dnsmasq-dns" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.152824 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9" containerName="mariadb-database-create" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.152835 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dc6911f-8892-4a6a-99c8-d9cee0eac352" containerName="mariadb-database-create" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.152850 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf5794fa-0b86-4d4a-a9ba-500a4834e315" containerName="mariadb-account-create-update" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.152863 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="b566ceb5-8640-40f3-bbf1-c0e11f82602c" containerName="mariadb-account-create-update" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.157037 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pbkn6" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.165783 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.192580 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pbkn6"] Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.279559 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a0683a9-f1aa-4456-9fe3-31cb5eacca57-operator-scripts\") pod \"root-account-create-update-pbkn6\" (UID: \"9a0683a9-f1aa-4456-9fe3-31cb5eacca57\") " pod="openstack/root-account-create-update-pbkn6" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.279796 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8dcp\" (UniqueName: \"kubernetes.io/projected/9a0683a9-f1aa-4456-9fe3-31cb5eacca57-kube-api-access-t8dcp\") pod \"root-account-create-update-pbkn6\" (UID: \"9a0683a9-f1aa-4456-9fe3-31cb5eacca57\") " pod="openstack/root-account-create-update-pbkn6" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.379963 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.380965 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a0683a9-f1aa-4456-9fe3-31cb5eacca57-operator-scripts\") pod \"root-account-create-update-pbkn6\" (UID: \"9a0683a9-f1aa-4456-9fe3-31cb5eacca57\") " pod="openstack/root-account-create-update-pbkn6" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.381229 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-t8dcp\" (UniqueName: \"kubernetes.io/projected/9a0683a9-f1aa-4456-9fe3-31cb5eacca57-kube-api-access-t8dcp\") pod \"root-account-create-update-pbkn6\" (UID: \"9a0683a9-f1aa-4456-9fe3-31cb5eacca57\") " pod="openstack/root-account-create-update-pbkn6" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.383321 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a0683a9-f1aa-4456-9fe3-31cb5eacca57-operator-scripts\") pod \"root-account-create-update-pbkn6\" (UID: \"9a0683a9-f1aa-4456-9fe3-31cb5eacca57\") " pod="openstack/root-account-create-update-pbkn6" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.404627 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8dcp\" (UniqueName: \"kubernetes.io/projected/9a0683a9-f1aa-4456-9fe3-31cb5eacca57-kube-api-access-t8dcp\") pod \"root-account-create-update-pbkn6\" (UID: \"9a0683a9-f1aa-4456-9fe3-31cb5eacca57\") " pod="openstack/root-account-create-update-pbkn6" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.487060 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pbkn6" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.545211 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.546925 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.554450 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.554704 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.557570 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.557836 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-d55jj" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.565502 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.687011 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb5764cb-cf47-41e0-9759-d0d894878303-scripts\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.687325 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g75j5\" (UniqueName: \"kubernetes.io/projected/eb5764cb-cf47-41e0-9759-d0d894878303-kube-api-access-g75j5\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.687402 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb5764cb-cf47-41e0-9759-d0d894878303-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " 
pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.687435 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb5764cb-cf47-41e0-9759-d0d894878303-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.687487 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb5764cb-cf47-41e0-9759-d0d894878303-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.687519 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb5764cb-cf47-41e0-9759-d0d894878303-config\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.687544 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eb5764cb-cf47-41e0-9759-d0d894878303-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.788573 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb5764cb-cf47-41e0-9759-d0d894878303-scripts\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.788644 4871 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-g75j5\" (UniqueName: \"kubernetes.io/projected/eb5764cb-cf47-41e0-9759-d0d894878303-kube-api-access-g75j5\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.788747 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb5764cb-cf47-41e0-9759-d0d894878303-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.788788 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb5764cb-cf47-41e0-9759-d0d894878303-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.788820 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb5764cb-cf47-41e0-9759-d0d894878303-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.788902 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb5764cb-cf47-41e0-9759-d0d894878303-config\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.788955 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eb5764cb-cf47-41e0-9759-d0d894878303-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " 
pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.789485 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eb5764cb-cf47-41e0-9759-d0d894878303-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.790285 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb5764cb-cf47-41e0-9759-d0d894878303-scripts\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.792120 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb5764cb-cf47-41e0-9759-d0d894878303-config\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.800182 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb5764cb-cf47-41e0-9759-d0d894878303-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.802208 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb5764cb-cf47-41e0-9759-d0d894878303-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.812853 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/eb5764cb-cf47-41e0-9759-d0d894878303-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.822302 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g75j5\" (UniqueName: \"kubernetes.io/projected/eb5764cb-cf47-41e0-9759-d0d894878303-kube-api-access-g75j5\") pod \"ovn-northd-0\" (UID: \"eb5764cb-cf47-41e0-9759-d0d894878303\") " pod="openstack/ovn-northd-0" Jan 28 15:37:53 crc kubenswrapper[4871]: I0128 15:37:53.899451 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 15:37:54 crc kubenswrapper[4871]: I0128 15:37:54.009638 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pbkn6"] Jan 28 15:37:54 crc kubenswrapper[4871]: I0128 15:37:54.167458 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 15:37:54 crc kubenswrapper[4871]: W0128 15:37:54.174562 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb5764cb_cf47_41e0_9759_d0d894878303.slice/crio-7ce027f0a32a5282855b05baf2f614f3ffc6f038da608a32dcf8d5187101c208 WatchSource:0}: Error finding container 7ce027f0a32a5282855b05baf2f614f3ffc6f038da608a32dcf8d5187101c208: Status 404 returned error can't find the container with id 7ce027f0a32a5282855b05baf2f614f3ffc6f038da608a32dcf8d5187101c208 Jan 28 15:37:54 crc kubenswrapper[4871]: I0128 15:37:54.313076 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pbkn6" event={"ID":"9a0683a9-f1aa-4456-9fe3-31cb5eacca57","Type":"ContainerStarted","Data":"4712d33e245c90713521b4f4c39b4b38850cc29b9e7828bdedfa85ec129de186"} Jan 28 15:37:54 crc kubenswrapper[4871]: I0128 15:37:54.313120 4871 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/root-account-create-update-pbkn6" event={"ID":"9a0683a9-f1aa-4456-9fe3-31cb5eacca57","Type":"ContainerStarted","Data":"287e338f6fe860bcfce57a37864ed816e872e5d3f327c19295093cb389728738"} Jan 28 15:37:54 crc kubenswrapper[4871]: I0128 15:37:54.314446 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"eb5764cb-cf47-41e0-9759-d0d894878303","Type":"ContainerStarted","Data":"7ce027f0a32a5282855b05baf2f614f3ffc6f038da608a32dcf8d5187101c208"} Jan 28 15:37:54 crc kubenswrapper[4871]: I0128 15:37:54.334996 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-pbkn6" podStartSLOduration=1.334978936 podStartE2EDuration="1.334978936s" podCreationTimestamp="2026-01-28 15:37:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:37:54.329714561 +0000 UTC m=+1226.225552883" watchObservedRunningTime="2026-01-28 15:37:54.334978936 +0000 UTC m=+1226.230817258" Jan 28 15:37:54 crc kubenswrapper[4871]: I0128 15:37:54.806962 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-etc-swift\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:54 crc kubenswrapper[4871]: I0128 15:37:54.924136 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6e17493-c4b5-417e-b5b2-42a1a245447e-etc-swift\") pod \"swift-storage-0\" (UID: \"a6e17493-c4b5-417e-b5b2-42a1a245447e\") " pod="openstack/swift-storage-0" Jan 28 15:37:55 crc kubenswrapper[4871]: I0128 15:37:55.032564 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 28 15:37:55 crc kubenswrapper[4871]: I0128 15:37:55.358095 4871 generic.go:334] "Generic (PLEG): container finished" podID="9a0683a9-f1aa-4456-9fe3-31cb5eacca57" containerID="4712d33e245c90713521b4f4c39b4b38850cc29b9e7828bdedfa85ec129de186" exitCode=0 Jan 28 15:37:55 crc kubenswrapper[4871]: I0128 15:37:55.358503 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pbkn6" event={"ID":"9a0683a9-f1aa-4456-9fe3-31cb5eacca57","Type":"ContainerDied","Data":"4712d33e245c90713521b4f4c39b4b38850cc29b9e7828bdedfa85ec129de186"} Jan 28 15:37:55 crc kubenswrapper[4871]: I0128 15:37:55.360123 4871 generic.go:334] "Generic (PLEG): container finished" podID="b5a578f5-c09e-40cd-b9b6-36b7b1f61370" containerID="100853051cefc922a33ccaa7920e973ecc01eddc46339cf81ba93b62359e560e" exitCode=0 Jan 28 15:37:55 crc kubenswrapper[4871]: I0128 15:37:55.360146 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4jm4z" event={"ID":"b5a578f5-c09e-40cd-b9b6-36b7b1f61370","Type":"ContainerDied","Data":"100853051cefc922a33ccaa7920e973ecc01eddc46339cf81ba93b62359e560e"} Jan 28 15:37:56 crc kubenswrapper[4871]: I0128 15:37:56.330396 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 15:37:56 crc kubenswrapper[4871]: I0128 15:37:56.379178 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"eb5764cb-cf47-41e0-9759-d0d894878303","Type":"ContainerStarted","Data":"0c4bcb759f002eec894807ab49a05a5f7b5347ae05e3c5d9a5ea5b1130eb2446"} Jan 28 15:37:56 crc kubenswrapper[4871]: I0128 15:37:56.379222 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"eb5764cb-cf47-41e0-9759-d0d894878303","Type":"ContainerStarted","Data":"ea0158cf69ee0c114a2061e5432eba187f642dd9a9f471022757e8c6889b143e"} Jan 28 15:37:56 crc kubenswrapper[4871]: I0128 
15:37:56.380189 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 28 15:37:56 crc kubenswrapper[4871]: I0128 15:37:56.389196 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6e17493-c4b5-417e-b5b2-42a1a245447e","Type":"ContainerStarted","Data":"727a175843223d0b4a7c640a547a98c616f5c94f6ec674f1b9aeea6466ff3f43"} Jan 28 15:37:56 crc kubenswrapper[4871]: I0128 15:37:56.403314 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.2093660059999998 podStartE2EDuration="3.403292924s" podCreationTimestamp="2026-01-28 15:37:53 +0000 UTC" firstStartedPulling="2026-01-28 15:37:54.178127878 +0000 UTC m=+1226.073966210" lastFinishedPulling="2026-01-28 15:37:55.372054806 +0000 UTC m=+1227.267893128" observedRunningTime="2026-01-28 15:37:56.401089355 +0000 UTC m=+1228.296927697" watchObservedRunningTime="2026-01-28 15:37:56.403292924 +0000 UTC m=+1228.299131246" Jan 28 15:37:56 crc kubenswrapper[4871]: I0128 15:37:56.917097 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-s99pp"] Jan 28 15:37:56 crc kubenswrapper[4871]: I0128 15:37:56.918523 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-s99pp" Jan 28 15:37:56 crc kubenswrapper[4871]: I0128 15:37:56.921356 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-swrvp" Jan 28 15:37:56 crc kubenswrapper[4871]: I0128 15:37:56.921654 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 28 15:37:56 crc kubenswrapper[4871]: I0128 15:37:56.929495 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-s99pp"] Jan 28 15:37:56 crc kubenswrapper[4871]: I0128 15:37:56.956943 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pbkn6" Jan 28 15:37:56 crc kubenswrapper[4871]: I0128 15:37:56.962370 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.059133 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a0683a9-f1aa-4456-9fe3-31cb5eacca57-operator-scripts\") pod \"9a0683a9-f1aa-4456-9fe3-31cb5eacca57\" (UID: \"9a0683a9-f1aa-4456-9fe3-31cb5eacca57\") " Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.059287 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8dcp\" (UniqueName: \"kubernetes.io/projected/9a0683a9-f1aa-4456-9fe3-31cb5eacca57-kube-api-access-t8dcp\") pod \"9a0683a9-f1aa-4456-9fe3-31cb5eacca57\" (UID: \"9a0683a9-f1aa-4456-9fe3-31cb5eacca57\") " Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.059892 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a0683a9-f1aa-4456-9fe3-31cb5eacca57-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a0683a9-f1aa-4456-9fe3-31cb5eacca57" (UID: "9a0683a9-f1aa-4456-9fe3-31cb5eacca57"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.059897 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02722604-9d90-40f1-9518-ee221fecdca0-combined-ca-bundle\") pod \"glance-db-sync-s99pp\" (UID: \"02722604-9d90-40f1-9518-ee221fecdca0\") " pod="openstack/glance-db-sync-s99pp" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.060163 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bthhn\" (UniqueName: \"kubernetes.io/projected/02722604-9d90-40f1-9518-ee221fecdca0-kube-api-access-bthhn\") pod \"glance-db-sync-s99pp\" (UID: \"02722604-9d90-40f1-9518-ee221fecdca0\") " pod="openstack/glance-db-sync-s99pp" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.060193 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02722604-9d90-40f1-9518-ee221fecdca0-config-data\") pod \"glance-db-sync-s99pp\" (UID: \"02722604-9d90-40f1-9518-ee221fecdca0\") " pod="openstack/glance-db-sync-s99pp" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.060398 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/02722604-9d90-40f1-9518-ee221fecdca0-db-sync-config-data\") pod \"glance-db-sync-s99pp\" (UID: \"02722604-9d90-40f1-9518-ee221fecdca0\") " pod="openstack/glance-db-sync-s99pp" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.060740 4871 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a0683a9-f1aa-4456-9fe3-31cb5eacca57-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.064861 4871 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a0683a9-f1aa-4456-9fe3-31cb5eacca57-kube-api-access-t8dcp" (OuterVolumeSpecName: "kube-api-access-t8dcp") pod "9a0683a9-f1aa-4456-9fe3-31cb5eacca57" (UID: "9a0683a9-f1aa-4456-9fe3-31cb5eacca57"). InnerVolumeSpecName "kube-api-access-t8dcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.161880 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-ring-data-devices\") pod \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.161937 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-swiftconf\") pod \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.161979 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpff7\" (UniqueName: \"kubernetes.io/projected/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-kube-api-access-lpff7\") pod \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.162078 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-combined-ca-bundle\") pod \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.162113 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-scripts\") pod \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.162147 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-etc-swift\") pod \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.162358 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b5a578f5-c09e-40cd-b9b6-36b7b1f61370" (UID: "b5a578f5-c09e-40cd-b9b6-36b7b1f61370"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.163168 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b5a578f5-c09e-40cd-b9b6-36b7b1f61370" (UID: "b5a578f5-c09e-40cd-b9b6-36b7b1f61370"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.163928 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-dispersionconf\") pod \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\" (UID: \"b5a578f5-c09e-40cd-b9b6-36b7b1f61370\") " Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.164517 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02722604-9d90-40f1-9518-ee221fecdca0-combined-ca-bundle\") pod \"glance-db-sync-s99pp\" (UID: \"02722604-9d90-40f1-9518-ee221fecdca0\") " pod="openstack/glance-db-sync-s99pp" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.164820 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bthhn\" (UniqueName: \"kubernetes.io/projected/02722604-9d90-40f1-9518-ee221fecdca0-kube-api-access-bthhn\") pod \"glance-db-sync-s99pp\" (UID: \"02722604-9d90-40f1-9518-ee221fecdca0\") " pod="openstack/glance-db-sync-s99pp" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.164854 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02722604-9d90-40f1-9518-ee221fecdca0-config-data\") pod \"glance-db-sync-s99pp\" (UID: \"02722604-9d90-40f1-9518-ee221fecdca0\") " pod="openstack/glance-db-sync-s99pp" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.164918 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/02722604-9d90-40f1-9518-ee221fecdca0-db-sync-config-data\") pod \"glance-db-sync-s99pp\" (UID: \"02722604-9d90-40f1-9518-ee221fecdca0\") " pod="openstack/glance-db-sync-s99pp" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.164986 4871 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8dcp\" (UniqueName: \"kubernetes.io/projected/9a0683a9-f1aa-4456-9fe3-31cb5eacca57-kube-api-access-t8dcp\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.164999 4871 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.165009 4871 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.166465 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-kube-api-access-lpff7" (OuterVolumeSpecName: "kube-api-access-lpff7") pod "b5a578f5-c09e-40cd-b9b6-36b7b1f61370" (UID: "b5a578f5-c09e-40cd-b9b6-36b7b1f61370"). InnerVolumeSpecName "kube-api-access-lpff7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.169183 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/02722604-9d90-40f1-9518-ee221fecdca0-db-sync-config-data\") pod \"glance-db-sync-s99pp\" (UID: \"02722604-9d90-40f1-9518-ee221fecdca0\") " pod="openstack/glance-db-sync-s99pp" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.169500 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02722604-9d90-40f1-9518-ee221fecdca0-combined-ca-bundle\") pod \"glance-db-sync-s99pp\" (UID: \"02722604-9d90-40f1-9518-ee221fecdca0\") " pod="openstack/glance-db-sync-s99pp" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.171343 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02722604-9d90-40f1-9518-ee221fecdca0-config-data\") pod \"glance-db-sync-s99pp\" (UID: \"02722604-9d90-40f1-9518-ee221fecdca0\") " pod="openstack/glance-db-sync-s99pp" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.176164 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b5a578f5-c09e-40cd-b9b6-36b7b1f61370" (UID: "b5a578f5-c09e-40cd-b9b6-36b7b1f61370"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.183343 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5a578f5-c09e-40cd-b9b6-36b7b1f61370" (UID: "b5a578f5-c09e-40cd-b9b6-36b7b1f61370"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.190770 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bthhn\" (UniqueName: \"kubernetes.io/projected/02722604-9d90-40f1-9518-ee221fecdca0-kube-api-access-bthhn\") pod \"glance-db-sync-s99pp\" (UID: \"02722604-9d90-40f1-9518-ee221fecdca0\") " pod="openstack/glance-db-sync-s99pp" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.190942 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-scripts" (OuterVolumeSpecName: "scripts") pod "b5a578f5-c09e-40cd-b9b6-36b7b1f61370" (UID: "b5a578f5-c09e-40cd-b9b6-36b7b1f61370"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.193817 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b5a578f5-c09e-40cd-b9b6-36b7b1f61370" (UID: "b5a578f5-c09e-40cd-b9b6-36b7b1f61370"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.265970 4871 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.266253 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpff7\" (UniqueName: \"kubernetes.io/projected/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-kube-api-access-lpff7\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.266320 4871 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.266373 4871 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.266426 4871 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b5a578f5-c09e-40cd-b9b6-36b7b1f61370-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.286283 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-s99pp" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.405484 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4jm4z" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.406526 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4jm4z" event={"ID":"b5a578f5-c09e-40cd-b9b6-36b7b1f61370","Type":"ContainerDied","Data":"1272d111416687a400ed50954de483d9902c77ed558665b22f8484ebb96c3462"} Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.406571 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1272d111416687a400ed50954de483d9902c77ed558665b22f8484ebb96c3462" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.417201 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pbkn6" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.418451 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pbkn6" event={"ID":"9a0683a9-f1aa-4456-9fe3-31cb5eacca57","Type":"ContainerDied","Data":"287e338f6fe860bcfce57a37864ed816e872e5d3f327c19295093cb389728738"} Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.418495 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="287e338f6fe860bcfce57a37864ed816e872e5d3f327c19295093cb389728738" Jan 28 15:37:57 crc kubenswrapper[4871]: I0128 15:37:57.809143 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-s99pp"] Jan 28 15:37:57 crc kubenswrapper[4871]: W0128 15:37:57.816044 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02722604_9d90_40f1_9518_ee221fecdca0.slice/crio-b577da67bfa7459a8bbdfac4b0d63bc0ca17e062189263788ef43d7d584abcb4 WatchSource:0}: Error finding container b577da67bfa7459a8bbdfac4b0d63bc0ca17e062189263788ef43d7d584abcb4: Status 404 returned error can't find the container with id 
b577da67bfa7459a8bbdfac4b0d63bc0ca17e062189263788ef43d7d584abcb4 Jan 28 15:37:58 crc kubenswrapper[4871]: I0128 15:37:58.000898 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 15:37:58 crc kubenswrapper[4871]: I0128 15:37:58.425240 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-s99pp" event={"ID":"02722604-9d90-40f1-9518-ee221fecdca0","Type":"ContainerStarted","Data":"b577da67bfa7459a8bbdfac4b0d63bc0ca17e062189263788ef43d7d584abcb4"} Jan 28 15:37:59 crc kubenswrapper[4871]: I0128 15:37:59.447317 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6e17493-c4b5-417e-b5b2-42a1a245447e","Type":"ContainerStarted","Data":"dd1477e9000c32c6edfecf8b728eec6c3cb967db59c894ffe35cd0dfe3dbba57"} Jan 28 15:37:59 crc kubenswrapper[4871]: I0128 15:37:59.447746 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6e17493-c4b5-417e-b5b2-42a1a245447e","Type":"ContainerStarted","Data":"00d110dca0b812d0a43ca67f3a1971f3642633f2dd8c959e282dcd0cbf40a05c"} Jan 28 15:38:00 crc kubenswrapper[4871]: I0128 15:38:00.242028 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pbkn6"] Jan 28 15:38:00 crc kubenswrapper[4871]: I0128 15:38:00.249961 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pbkn6"] Jan 28 15:38:00 crc kubenswrapper[4871]: I0128 15:38:00.461502 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6e17493-c4b5-417e-b5b2-42a1a245447e","Type":"ContainerStarted","Data":"878bce435395aa76c75074df3b3fa39bfbfb04e8f8fdb484dbaa6d2e80380b1a"} Jan 28 15:38:00 crc kubenswrapper[4871]: I0128 15:38:00.915490 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a0683a9-f1aa-4456-9fe3-31cb5eacca57" 
path="/var/lib/kubelet/pods/9a0683a9-f1aa-4456-9fe3-31cb5eacca57/volumes" Jan 28 15:38:01 crc kubenswrapper[4871]: I0128 15:38:01.474513 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6e17493-c4b5-417e-b5b2-42a1a245447e","Type":"ContainerStarted","Data":"42368970d548864e35da6e6b4d59ab1c50c5ecb6663763994c951ee03976c723"} Jan 28 15:38:02 crc kubenswrapper[4871]: I0128 15:38:02.506111 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6e17493-c4b5-417e-b5b2-42a1a245447e","Type":"ContainerStarted","Data":"8655aa2f5ce8ac0eccc5db294459841c5933a347100d30fb7f577d15293a853d"} Jan 28 15:38:04 crc kubenswrapper[4871]: I0128 15:38:04.627432 4871 generic.go:334] "Generic (PLEG): container finished" podID="48f16980-86d0-4648-9ebd-a428b5253832" containerID="997dac3993900059d2b662b7d41e85dc593e867353b7064b650a90e524340637" exitCode=0 Jan 28 15:38:04 crc kubenswrapper[4871]: I0128 15:38:04.627468 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"48f16980-86d0-4648-9ebd-a428b5253832","Type":"ContainerDied","Data":"997dac3993900059d2b662b7d41e85dc593e867353b7064b650a90e524340637"} Jan 28 15:38:04 crc kubenswrapper[4871]: I0128 15:38:04.631279 4871 generic.go:334] "Generic (PLEG): container finished" podID="1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2" containerID="c53190f3262b5642f7ee119e0f5ec89c6c9f1943ff1790f4cbd3711f966b2bf8" exitCode=0 Jan 28 15:38:04 crc kubenswrapper[4871]: I0128 15:38:04.631321 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2","Type":"ContainerDied","Data":"c53190f3262b5642f7ee119e0f5ec89c6c9f1943ff1790f4cbd3711f966b2bf8"} Jan 28 15:38:04 crc kubenswrapper[4871]: I0128 15:38:04.639001 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"a6e17493-c4b5-417e-b5b2-42a1a245447e","Type":"ContainerStarted","Data":"fbb98755781652661dc2f53a0a6b127efcad433fb9d8dbd73001ad0cb312c758"} Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.023794 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-whvv9"] Jan 28 15:38:05 crc kubenswrapper[4871]: E0128 15:38:05.024138 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a578f5-c09e-40cd-b9b6-36b7b1f61370" containerName="swift-ring-rebalance" Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.024155 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a578f5-c09e-40cd-b9b6-36b7b1f61370" containerName="swift-ring-rebalance" Jan 28 15:38:05 crc kubenswrapper[4871]: E0128 15:38:05.024175 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0683a9-f1aa-4456-9fe3-31cb5eacca57" containerName="mariadb-account-create-update" Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.024182 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0683a9-f1aa-4456-9fe3-31cb5eacca57" containerName="mariadb-account-create-update" Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.024323 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a578f5-c09e-40cd-b9b6-36b7b1f61370" containerName="swift-ring-rebalance" Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.024357 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0683a9-f1aa-4456-9fe3-31cb5eacca57" containerName="mariadb-account-create-update" Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.028858 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-whvv9" Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.032045 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.042581 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-whvv9"] Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.282406 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/795d6166-93a3-4371-a502-b18f09b9374f-operator-scripts\") pod \"root-account-create-update-whvv9\" (UID: \"795d6166-93a3-4371-a502-b18f09b9374f\") " pod="openstack/root-account-create-update-whvv9" Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.282457 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4f6b\" (UniqueName: \"kubernetes.io/projected/795d6166-93a3-4371-a502-b18f09b9374f-kube-api-access-j4f6b\") pod \"root-account-create-update-whvv9\" (UID: \"795d6166-93a3-4371-a502-b18f09b9374f\") " pod="openstack/root-account-create-update-whvv9" Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.486063 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/795d6166-93a3-4371-a502-b18f09b9374f-operator-scripts\") pod \"root-account-create-update-whvv9\" (UID: \"795d6166-93a3-4371-a502-b18f09b9374f\") " pod="openstack/root-account-create-update-whvv9" Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.486497 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4f6b\" (UniqueName: \"kubernetes.io/projected/795d6166-93a3-4371-a502-b18f09b9374f-kube-api-access-j4f6b\") pod \"root-account-create-update-whvv9\" (UID: 
\"795d6166-93a3-4371-a502-b18f09b9374f\") " pod="openstack/root-account-create-update-whvv9" Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.487703 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/795d6166-93a3-4371-a502-b18f09b9374f-operator-scripts\") pod \"root-account-create-update-whvv9\" (UID: \"795d6166-93a3-4371-a502-b18f09b9374f\") " pod="openstack/root-account-create-update-whvv9" Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.510420 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4f6b\" (UniqueName: \"kubernetes.io/projected/795d6166-93a3-4371-a502-b18f09b9374f-kube-api-access-j4f6b\") pod \"root-account-create-update-whvv9\" (UID: \"795d6166-93a3-4371-a502-b18f09b9374f\") " pod="openstack/root-account-create-update-whvv9" Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.727155 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-whvv9" Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.747342 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6e17493-c4b5-417e-b5b2-42a1a245447e","Type":"ContainerStarted","Data":"750d8db948c555bb87bbf0e0c4672b0213c6df893fe1d6752f76274035c33e34"} Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.747422 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6e17493-c4b5-417e-b5b2-42a1a245447e","Type":"ContainerStarted","Data":"2ff1f910bb9f1a64a4f89b5beb1d9944199ce0dbc547b38af1019c183eeed2c2"} Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.749371 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"48f16980-86d0-4648-9ebd-a428b5253832","Type":"ContainerStarted","Data":"1b3192e3ea38b3eb5ab945c4d45874658918197c50aa8d080fb46122566a6392"} Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.750005 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.768934 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2","Type":"ContainerStarted","Data":"8aa7c2977a09110051965b4187757bb0f3ea896de0c050e6004a5ca9bb095d80"} Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.769299 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.790112 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=57.67470469 podStartE2EDuration="1m14.790082443s" podCreationTimestamp="2026-01-28 15:36:51 +0000 UTC" firstStartedPulling="2026-01-28 15:37:12.51064037 +0000 UTC 
m=+1184.406478702" lastFinishedPulling="2026-01-28 15:37:29.626018133 +0000 UTC m=+1201.521856455" observedRunningTime="2026-01-28 15:38:05.776384762 +0000 UTC m=+1237.672223084" watchObservedRunningTime="2026-01-28 15:38:05.790082443 +0000 UTC m=+1237.685920765" Jan 28 15:38:05 crc kubenswrapper[4871]: I0128 15:38:05.886997 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=57.311876978 podStartE2EDuration="1m14.886952753s" podCreationTimestamp="2026-01-28 15:36:51 +0000 UTC" firstStartedPulling="2026-01-28 15:37:12.175249841 +0000 UTC m=+1184.071088163" lastFinishedPulling="2026-01-28 15:37:29.750325616 +0000 UTC m=+1201.646163938" observedRunningTime="2026-01-28 15:38:05.88178857 +0000 UTC m=+1237.777626882" watchObservedRunningTime="2026-01-28 15:38:05.886952753 +0000 UTC m=+1237.782791075" Jan 28 15:38:06 crc kubenswrapper[4871]: I0128 15:38:06.976923 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-whvv9"] Jan 28 15:38:06 crc kubenswrapper[4871]: W0128 15:38:06.984985 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod795d6166_93a3_4371_a502_b18f09b9374f.slice/crio-c365cee1c26392742929db72ce71a0fc336155c10086424db9b5f8bdeff727bc WatchSource:0}: Error finding container c365cee1c26392742929db72ce71a0fc336155c10086424db9b5f8bdeff727bc: Status 404 returned error can't find the container with id c365cee1c26392742929db72ce71a0fc336155c10086424db9b5f8bdeff727bc Jan 28 15:38:07 crc kubenswrapper[4871]: I0128 15:38:07.801959 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-whvv9" event={"ID":"795d6166-93a3-4371-a502-b18f09b9374f","Type":"ContainerStarted","Data":"f084090db21acc11080672ab9665821f725f10cf65033b57ac0b0890b0e859a1"} Jan 28 15:38:07 crc kubenswrapper[4871]: I0128 15:38:07.802283 4871 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-whvv9" event={"ID":"795d6166-93a3-4371-a502-b18f09b9374f","Type":"ContainerStarted","Data":"c365cee1c26392742929db72ce71a0fc336155c10086424db9b5f8bdeff727bc"} Jan 28 15:38:07 crc kubenswrapper[4871]: I0128 15:38:07.985387 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-whvv9" podStartSLOduration=2.985361948 podStartE2EDuration="2.985361948s" podCreationTimestamp="2026-01-28 15:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:38:07.969816789 +0000 UTC m=+1239.865655111" watchObservedRunningTime="2026-01-28 15:38:07.985361948 +0000 UTC m=+1239.881200270" Jan 28 15:38:09 crc kubenswrapper[4871]: I0128 15:38:09.817847 4871 generic.go:334] "Generic (PLEG): container finished" podID="795d6166-93a3-4371-a502-b18f09b9374f" containerID="f084090db21acc11080672ab9665821f725f10cf65033b57ac0b0890b0e859a1" exitCode=0 Jan 28 15:38:09 crc kubenswrapper[4871]: I0128 15:38:09.818195 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-whvv9" event={"ID":"795d6166-93a3-4371-a502-b18f09b9374f","Type":"ContainerDied","Data":"f084090db21acc11080672ab9665821f725f10cf65033b57ac0b0890b0e859a1"} Jan 28 15:38:11 crc kubenswrapper[4871]: I0128 15:38:11.656957 4871 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-2s4s6" podUID="10434904-135c-4ec2-a483-1647ce52500b" containerName="ovn-controller" probeResult="failure" output=< Jan 28 15:38:11 crc kubenswrapper[4871]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 28 15:38:11 crc kubenswrapper[4871]: > Jan 28 15:38:13 crc kubenswrapper[4871]: I0128 15:38:13.814410 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:38:13 crc kubenswrapper[4871]: I0128 15:38:13.814773 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:38:13 crc kubenswrapper[4871]: I0128 15:38:13.967788 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 28 15:38:16 crc kubenswrapper[4871]: I0128 15:38:16.663366 4871 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-2s4s6" podUID="10434904-135c-4ec2-a483-1647ce52500b" containerName="ovn-controller" probeResult="failure" output=< Jan 28 15:38:16 crc kubenswrapper[4871]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 28 15:38:16 crc kubenswrapper[4871]: > Jan 28 15:38:16 crc kubenswrapper[4871]: I0128 15:38:16.703247 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:38:16 crc kubenswrapper[4871]: I0128 15:38:16.705756 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-c2xpq" Jan 28 15:38:16 crc kubenswrapper[4871]: I0128 15:38:16.970680 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2s4s6-config-vk6nh"] Jan 28 15:38:16 crc kubenswrapper[4871]: I0128 15:38:16.971937 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:16 crc kubenswrapper[4871]: I0128 15:38:16.976874 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 28 15:38:16 crc kubenswrapper[4871]: I0128 15:38:16.978340 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2s4s6-config-vk6nh"] Jan 28 15:38:17 crc kubenswrapper[4871]: I0128 15:38:17.140847 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a4f1ee34-adf8-4395-97d0-31eef5edde88-additional-scripts\") pod \"ovn-controller-2s4s6-config-vk6nh\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") " pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:17 crc kubenswrapper[4871]: I0128 15:38:17.140933 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a4f1ee34-adf8-4395-97d0-31eef5edde88-var-run\") pod \"ovn-controller-2s4s6-config-vk6nh\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") " pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:17 crc kubenswrapper[4871]: I0128 15:38:17.141057 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wv96\" (UniqueName: \"kubernetes.io/projected/a4f1ee34-adf8-4395-97d0-31eef5edde88-kube-api-access-8wv96\") pod \"ovn-controller-2s4s6-config-vk6nh\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") " pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:17 crc kubenswrapper[4871]: I0128 15:38:17.141088 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a4f1ee34-adf8-4395-97d0-31eef5edde88-var-log-ovn\") pod \"ovn-controller-2s4s6-config-vk6nh\" (UID: 
\"a4f1ee34-adf8-4395-97d0-31eef5edde88\") " pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:17 crc kubenswrapper[4871]: I0128 15:38:17.141127 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f1ee34-adf8-4395-97d0-31eef5edde88-scripts\") pod \"ovn-controller-2s4s6-config-vk6nh\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") " pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:17 crc kubenswrapper[4871]: I0128 15:38:17.141158 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a4f1ee34-adf8-4395-97d0-31eef5edde88-var-run-ovn\") pod \"ovn-controller-2s4s6-config-vk6nh\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") " pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:17 crc kubenswrapper[4871]: I0128 15:38:17.243114 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a4f1ee34-adf8-4395-97d0-31eef5edde88-var-run\") pod \"ovn-controller-2s4s6-config-vk6nh\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") " pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:17 crc kubenswrapper[4871]: I0128 15:38:17.243226 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wv96\" (UniqueName: \"kubernetes.io/projected/a4f1ee34-adf8-4395-97d0-31eef5edde88-kube-api-access-8wv96\") pod \"ovn-controller-2s4s6-config-vk6nh\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") " pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:17 crc kubenswrapper[4871]: I0128 15:38:17.243258 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a4f1ee34-adf8-4395-97d0-31eef5edde88-var-log-ovn\") pod 
\"ovn-controller-2s4s6-config-vk6nh\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") " pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:17 crc kubenswrapper[4871]: I0128 15:38:17.243281 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f1ee34-adf8-4395-97d0-31eef5edde88-scripts\") pod \"ovn-controller-2s4s6-config-vk6nh\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") " pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:17 crc kubenswrapper[4871]: I0128 15:38:17.243308 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a4f1ee34-adf8-4395-97d0-31eef5edde88-var-run-ovn\") pod \"ovn-controller-2s4s6-config-vk6nh\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") " pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:17 crc kubenswrapper[4871]: I0128 15:38:17.243351 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a4f1ee34-adf8-4395-97d0-31eef5edde88-additional-scripts\") pod \"ovn-controller-2s4s6-config-vk6nh\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") " pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:17 crc kubenswrapper[4871]: I0128 15:38:17.243478 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a4f1ee34-adf8-4395-97d0-31eef5edde88-var-run\") pod \"ovn-controller-2s4s6-config-vk6nh\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") " pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:17 crc kubenswrapper[4871]: I0128 15:38:17.243521 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a4f1ee34-adf8-4395-97d0-31eef5edde88-var-log-ovn\") pod \"ovn-controller-2s4s6-config-vk6nh\" (UID: 
\"a4f1ee34-adf8-4395-97d0-31eef5edde88\") " pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:17 crc kubenswrapper[4871]: I0128 15:38:17.243606 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a4f1ee34-adf8-4395-97d0-31eef5edde88-var-run-ovn\") pod \"ovn-controller-2s4s6-config-vk6nh\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") " pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:17 crc kubenswrapper[4871]: I0128 15:38:17.245040 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a4f1ee34-adf8-4395-97d0-31eef5edde88-additional-scripts\") pod \"ovn-controller-2s4s6-config-vk6nh\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") " pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:17 crc kubenswrapper[4871]: I0128 15:38:17.246051 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f1ee34-adf8-4395-97d0-31eef5edde88-scripts\") pod \"ovn-controller-2s4s6-config-vk6nh\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") " pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:17 crc kubenswrapper[4871]: I0128 15:38:17.264805 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wv96\" (UniqueName: \"kubernetes.io/projected/a4f1ee34-adf8-4395-97d0-31eef5edde88-kube-api-access-8wv96\") pod \"ovn-controller-2s4s6-config-vk6nh\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") " pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:17 crc kubenswrapper[4871]: I0128 15:38:17.292455 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2s4s6-config-vk6nh" Jan 28 15:38:18 crc kubenswrapper[4871]: E0128 15:38:18.608903 4871 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 28 15:38:18 crc kubenswrapper[4871]: E0128 15:38:18.609065 4871 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bthhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privi
leged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-s99pp_openstack(02722604-9d90-40f1-9518-ee221fecdca0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:38:18 crc kubenswrapper[4871]: E0128 15:38:18.611766 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-s99pp" podUID="02722604-9d90-40f1-9518-ee221fecdca0" Jan 28 15:38:18 crc kubenswrapper[4871]: I0128 15:38:18.730162 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-whvv9" Jan 28 15:38:18 crc kubenswrapper[4871]: I0128 15:38:18.867387 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4f6b\" (UniqueName: \"kubernetes.io/projected/795d6166-93a3-4371-a502-b18f09b9374f-kube-api-access-j4f6b\") pod \"795d6166-93a3-4371-a502-b18f09b9374f\" (UID: \"795d6166-93a3-4371-a502-b18f09b9374f\") " Jan 28 15:38:18 crc kubenswrapper[4871]: I0128 15:38:18.867475 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/795d6166-93a3-4371-a502-b18f09b9374f-operator-scripts\") pod \"795d6166-93a3-4371-a502-b18f09b9374f\" (UID: \"795d6166-93a3-4371-a502-b18f09b9374f\") " Jan 28 15:38:18 crc kubenswrapper[4871]: I0128 15:38:18.868774 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/795d6166-93a3-4371-a502-b18f09b9374f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "795d6166-93a3-4371-a502-b18f09b9374f" (UID: "795d6166-93a3-4371-a502-b18f09b9374f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:18 crc kubenswrapper[4871]: I0128 15:38:18.887181 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/795d6166-93a3-4371-a502-b18f09b9374f-kube-api-access-j4f6b" (OuterVolumeSpecName: "kube-api-access-j4f6b") pod "795d6166-93a3-4371-a502-b18f09b9374f" (UID: "795d6166-93a3-4371-a502-b18f09b9374f"). InnerVolumeSpecName "kube-api-access-j4f6b". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:38:18 crc kubenswrapper[4871]: I0128 15:38:18.916802 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6e17493-c4b5-417e-b5b2-42a1a245447e","Type":"ContainerStarted","Data":"710539d5bd45d60db0b3cfba5f2176e511a957559e546fede9beac4137f37dc0"}
Jan 28 15:38:18 crc kubenswrapper[4871]: I0128 15:38:18.919725 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-whvv9" event={"ID":"795d6166-93a3-4371-a502-b18f09b9374f","Type":"ContainerDied","Data":"c365cee1c26392742929db72ce71a0fc336155c10086424db9b5f8bdeff727bc"}
Jan 28 15:38:18 crc kubenswrapper[4871]: I0128 15:38:18.919771 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-whvv9"
Jan 28 15:38:18 crc kubenswrapper[4871]: I0128 15:38:18.919782 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c365cee1c26392742929db72ce71a0fc336155c10086424db9b5f8bdeff727bc"
Jan 28 15:38:18 crc kubenswrapper[4871]: E0128 15:38:18.922099 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-s99pp" podUID="02722604-9d90-40f1-9518-ee221fecdca0"
Jan 28 15:38:18 crc kubenswrapper[4871]: I0128 15:38:18.968967 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4f6b\" (UniqueName: \"kubernetes.io/projected/795d6166-93a3-4371-a502-b18f09b9374f-kube-api-access-j4f6b\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:18 crc kubenswrapper[4871]: I0128 15:38:18.969004 4871 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/795d6166-93a3-4371-a502-b18f09b9374f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:19 crc kubenswrapper[4871]: I0128 15:38:19.117091 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2s4s6-config-vk6nh"]
Jan 28 15:38:19 crc kubenswrapper[4871]: W0128 15:38:19.127076 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4f1ee34_adf8_4395_97d0_31eef5edde88.slice/crio-c30c9f37e89dfad2efae7130a42c84261d32dd7168498e6386208faf36a4edac WatchSource:0}: Error finding container c30c9f37e89dfad2efae7130a42c84261d32dd7168498e6386208faf36a4edac: Status 404 returned error can't find the container with id c30c9f37e89dfad2efae7130a42c84261d32dd7168498e6386208faf36a4edac
Jan 28 15:38:19 crc kubenswrapper[4871]: I0128 15:38:19.938283 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6e17493-c4b5-417e-b5b2-42a1a245447e","Type":"ContainerStarted","Data":"1c3fdeebc191eabefdeeebb4da438c894fdfb4186b89e4a9085f494364d4da09"}
Jan 28 15:38:19 crc kubenswrapper[4871]: I0128 15:38:19.938769 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6e17493-c4b5-417e-b5b2-42a1a245447e","Type":"ContainerStarted","Data":"a43d631e23edc99bb7050b8b5c1a1667eaf59c576f95f65dfb7c51181eeb74d1"}
Jan 28 15:38:19 crc kubenswrapper[4871]: I0128 15:38:19.938782 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6e17493-c4b5-417e-b5b2-42a1a245447e","Type":"ContainerStarted","Data":"4054e6ce8157ded1ffdc3bc7de126c5884926b4668dadc691a639ec7e91b3b53"}
Jan 28 15:38:19 crc kubenswrapper[4871]: I0128 15:38:19.938795 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6e17493-c4b5-417e-b5b2-42a1a245447e","Type":"ContainerStarted","Data":"8bd86ca149df47a13bb958a989c2ad0fe88304880235a4801b255b9c1610558e"}
Jan 28 15:38:19 crc kubenswrapper[4871]: I0128 15:38:19.941893 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2s4s6-config-vk6nh" event={"ID":"a4f1ee34-adf8-4395-97d0-31eef5edde88","Type":"ContainerStarted","Data":"de425b69a7249fd5458c804684161b32a996773362a4c2597bae28654c7a82be"}
Jan 28 15:38:19 crc kubenswrapper[4871]: I0128 15:38:19.941934 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2s4s6-config-vk6nh" event={"ID":"a4f1ee34-adf8-4395-97d0-31eef5edde88","Type":"ContainerStarted","Data":"c30c9f37e89dfad2efae7130a42c84261d32dd7168498e6386208faf36a4edac"}
Jan 28 15:38:19 crc kubenswrapper[4871]: I0128 15:38:19.961217 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-2s4s6-config-vk6nh" podStartSLOduration=3.961192429 podStartE2EDuration="3.961192429s" podCreationTimestamp="2026-01-28 15:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:38:19.960531859 +0000 UTC m=+1251.856370181" watchObservedRunningTime="2026-01-28 15:38:19.961192429 +0000 UTC m=+1251.857030751"
Jan 28 15:38:20 crc kubenswrapper[4871]: I0128 15:38:20.950975 4871 generic.go:334] "Generic (PLEG): container finished" podID="a4f1ee34-adf8-4395-97d0-31eef5edde88" containerID="de425b69a7249fd5458c804684161b32a996773362a4c2597bae28654c7a82be" exitCode=0
Jan 28 15:38:20 crc kubenswrapper[4871]: I0128 15:38:20.951067 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2s4s6-config-vk6nh" event={"ID":"a4f1ee34-adf8-4395-97d0-31eef5edde88","Type":"ContainerDied","Data":"de425b69a7249fd5458c804684161b32a996773362a4c2597bae28654c7a82be"}
Jan 28 15:38:20 crc kubenswrapper[4871]: I0128 15:38:20.957576 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6e17493-c4b5-417e-b5b2-42a1a245447e","Type":"ContainerStarted","Data":"c64d7edce9266ea14fea0b2fecfc7b6cb89ea9fd37eebdd3fbc4b1e31821d1d2"}
Jan 28 15:38:20 crc kubenswrapper[4871]: I0128 15:38:20.957635 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6e17493-c4b5-417e-b5b2-42a1a245447e","Type":"ContainerStarted","Data":"54b66e4bbb377371e77393c666c55b6d183e0f067df31df6c1109881bdf96644"}
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.284122 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=22.037697054 podStartE2EDuration="44.284099769s" podCreationTimestamp="2026-01-28 15:37:37 +0000 UTC" firstStartedPulling="2026-01-28 15:37:56.347948492 +0000 UTC m=+1228.243786814" lastFinishedPulling="2026-01-28 15:38:18.594351197 +0000 UTC m=+1250.490189529" observedRunningTime="2026-01-28 15:38:21.018814256 +0000 UTC m=+1252.914652668" watchObservedRunningTime="2026-01-28 15:38:21.284099769 +0000 UTC m=+1253.179938091"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.291184 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-7r2wh"]
Jan 28 15:38:21 crc kubenswrapper[4871]: E0128 15:38:21.291611 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795d6166-93a3-4371-a502-b18f09b9374f" containerName="mariadb-account-create-update"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.291631 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="795d6166-93a3-4371-a502-b18f09b9374f" containerName="mariadb-account-create-update"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.291858 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="795d6166-93a3-4371-a502-b18f09b9374f" containerName="mariadb-account-create-update"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.292897 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.296006 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.302967 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-7r2wh"]
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.414027 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-config\") pod \"dnsmasq-dns-77585f5f8c-7r2wh\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.414382 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-7r2wh\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.414526 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhbt6\" (UniqueName: \"kubernetes.io/projected/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-kube-api-access-bhbt6\") pod \"dnsmasq-dns-77585f5f8c-7r2wh\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.414677 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-7r2wh\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.414880 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-7r2wh\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.415019 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-7r2wh\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.516898 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-7r2wh\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.517249 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-7r2wh\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.517352 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-7r2wh\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.517550 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-config\") pod \"dnsmasq-dns-77585f5f8c-7r2wh\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.517723 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-7r2wh\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.517855 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhbt6\" (UniqueName: \"kubernetes.io/projected/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-kube-api-access-bhbt6\") pod \"dnsmasq-dns-77585f5f8c-7r2wh\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.518534 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-7r2wh\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.518553 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-7r2wh\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.518785 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-config\") pod \"dnsmasq-dns-77585f5f8c-7r2wh\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.518831 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-7r2wh\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.519293 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-7r2wh\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.545097 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhbt6\" (UniqueName: \"kubernetes.io/projected/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-kube-api-access-bhbt6\") pod \"dnsmasq-dns-77585f5f8c-7r2wh\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.645476 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh"
Jan 28 15:38:21 crc kubenswrapper[4871]: I0128 15:38:21.681752 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-2s4s6"
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.129939 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-7r2wh"]
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.257665 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2s4s6-config-vk6nh"
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.434166 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a4f1ee34-adf8-4395-97d0-31eef5edde88-var-run-ovn\") pod \"a4f1ee34-adf8-4395-97d0-31eef5edde88\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") "
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.434238 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a4f1ee34-adf8-4395-97d0-31eef5edde88-var-run\") pod \"a4f1ee34-adf8-4395-97d0-31eef5edde88\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") "
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.434278 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a4f1ee34-adf8-4395-97d0-31eef5edde88-additional-scripts\") pod \"a4f1ee34-adf8-4395-97d0-31eef5edde88\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") "
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.434322 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f1ee34-adf8-4395-97d0-31eef5edde88-scripts\") pod \"a4f1ee34-adf8-4395-97d0-31eef5edde88\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") "
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.434369 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a4f1ee34-adf8-4395-97d0-31eef5edde88-var-log-ovn\") pod \"a4f1ee34-adf8-4395-97d0-31eef5edde88\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") "
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.434471 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wv96\" (UniqueName: \"kubernetes.io/projected/a4f1ee34-adf8-4395-97d0-31eef5edde88-kube-api-access-8wv96\") pod \"a4f1ee34-adf8-4395-97d0-31eef5edde88\" (UID: \"a4f1ee34-adf8-4395-97d0-31eef5edde88\") "
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.435248 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4f1ee34-adf8-4395-97d0-31eef5edde88-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a4f1ee34-adf8-4395-97d0-31eef5edde88" (UID: "a4f1ee34-adf8-4395-97d0-31eef5edde88"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.435304 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4f1ee34-adf8-4395-97d0-31eef5edde88-var-run" (OuterVolumeSpecName: "var-run") pod "a4f1ee34-adf8-4395-97d0-31eef5edde88" (UID: "a4f1ee34-adf8-4395-97d0-31eef5edde88"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.435527 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4f1ee34-adf8-4395-97d0-31eef5edde88-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a4f1ee34-adf8-4395-97d0-31eef5edde88" (UID: "a4f1ee34-adf8-4395-97d0-31eef5edde88"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.436005 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4f1ee34-adf8-4395-97d0-31eef5edde88-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a4f1ee34-adf8-4395-97d0-31eef5edde88" (UID: "a4f1ee34-adf8-4395-97d0-31eef5edde88"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.436387 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4f1ee34-adf8-4395-97d0-31eef5edde88-scripts" (OuterVolumeSpecName: "scripts") pod "a4f1ee34-adf8-4395-97d0-31eef5edde88" (UID: "a4f1ee34-adf8-4395-97d0-31eef5edde88"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.438219 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f1ee34-adf8-4395-97d0-31eef5edde88-kube-api-access-8wv96" (OuterVolumeSpecName: "kube-api-access-8wv96") pod "a4f1ee34-adf8-4395-97d0-31eef5edde88" (UID: "a4f1ee34-adf8-4395-97d0-31eef5edde88"). InnerVolumeSpecName "kube-api-access-8wv96". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.536638 4871 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a4f1ee34-adf8-4395-97d0-31eef5edde88-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.536674 4871 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a4f1ee34-adf8-4395-97d0-31eef5edde88-var-run\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.536686 4871 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a4f1ee34-adf8-4395-97d0-31eef5edde88-additional-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.536698 4871 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f1ee34-adf8-4395-97d0-31eef5edde88-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.536710 4871 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a4f1ee34-adf8-4395-97d0-31eef5edde88-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.536722 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wv96\" (UniqueName: \"kubernetes.io/projected/a4f1ee34-adf8-4395-97d0-31eef5edde88-kube-api-access-8wv96\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.983846 4871 generic.go:334] "Generic (PLEG): container finished" podID="7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c" containerID="f625a85135c97429a429d9dd99406c08f512631b7e6a0cf8de4a9a58a0ffe6a6" exitCode=0
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.983969 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh" event={"ID":"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c","Type":"ContainerDied","Data":"f625a85135c97429a429d9dd99406c08f512631b7e6a0cf8de4a9a58a0ffe6a6"}
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.984474 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh" event={"ID":"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c","Type":"ContainerStarted","Data":"b87b8593b08649beb698a3727d99dae1370b1930a1435551ca9fca56a59596e3"}
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.986731 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2s4s6-config-vk6nh" event={"ID":"a4f1ee34-adf8-4395-97d0-31eef5edde88","Type":"ContainerDied","Data":"c30c9f37e89dfad2efae7130a42c84261d32dd7168498e6386208faf36a4edac"}
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.986766 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c30c9f37e89dfad2efae7130a42c84261d32dd7168498e6386208faf36a4edac"
Jan 28 15:38:22 crc kubenswrapper[4871]: I0128 15:38:22.986843 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2s4s6-config-vk6nh"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.032788 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.368948 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-pv7m2"]
Jan 28 15:38:23 crc kubenswrapper[4871]: E0128 15:38:23.376352 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4f1ee34-adf8-4395-97d0-31eef5edde88" containerName="ovn-config"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.376385 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4f1ee34-adf8-4395-97d0-31eef5edde88" containerName="ovn-config"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.376573 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4f1ee34-adf8-4395-97d0-31eef5edde88" containerName="ovn-config"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.377131 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pv7m2"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.381676 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9205-account-create-update-cdfq6"]
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.383890 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9205-account-create-update-cdfq6"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.396191 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pv7m2"]
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.396456 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.406105 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9205-account-create-update-cdfq6"]
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.424638 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-2s4s6-config-vk6nh"]
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.433048 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-2s4s6-config-vk6nh"]
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.441156 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-mtkgx"]
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.442183 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mtkgx"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.446778 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mtkgx"]
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.460722 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.463683 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b276b666-fe5b-4c85-8c89-3854e2bdbfc3-operator-scripts\") pod \"cinder-9205-account-create-update-cdfq6\" (UID: \"b276b666-fe5b-4c85-8c89-3854e2bdbfc3\") " pod="openstack/cinder-9205-account-create-update-cdfq6"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.463811 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkn6s\" (UniqueName: \"kubernetes.io/projected/b276b666-fe5b-4c85-8c89-3854e2bdbfc3-kube-api-access-gkn6s\") pod \"cinder-9205-account-create-update-cdfq6\" (UID: \"b276b666-fe5b-4c85-8c89-3854e2bdbfc3\") " pod="openstack/cinder-9205-account-create-update-cdfq6"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.463869 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b189b010-97b8-43c7-8021-05cb6f277f13-operator-scripts\") pod \"cinder-db-create-pv7m2\" (UID: \"b189b010-97b8-43c7-8021-05cb6f277f13\") " pod="openstack/cinder-db-create-pv7m2"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.463897 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpfp4\" (UniqueName: \"kubernetes.io/projected/b189b010-97b8-43c7-8021-05cb6f277f13-kube-api-access-hpfp4\") pod \"cinder-db-create-pv7m2\" (UID: \"b189b010-97b8-43c7-8021-05cb6f277f13\") " pod="openstack/cinder-db-create-pv7m2"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.566682 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9ac2bf-e1f3-47de-8c33-14098a2217e4-operator-scripts\") pod \"barbican-db-create-mtkgx\" (UID: \"1c9ac2bf-e1f3-47de-8c33-14098a2217e4\") " pod="openstack/barbican-db-create-mtkgx"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.567027 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkn6s\" (UniqueName: \"kubernetes.io/projected/b276b666-fe5b-4c85-8c89-3854e2bdbfc3-kube-api-access-gkn6s\") pod \"cinder-9205-account-create-update-cdfq6\" (UID: \"b276b666-fe5b-4c85-8c89-3854e2bdbfc3\") " pod="openstack/cinder-9205-account-create-update-cdfq6"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.567719 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b189b010-97b8-43c7-8021-05cb6f277f13-operator-scripts\") pod \"cinder-db-create-pv7m2\" (UID: \"b189b010-97b8-43c7-8021-05cb6f277f13\") " pod="openstack/cinder-db-create-pv7m2"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.567088 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b189b010-97b8-43c7-8021-05cb6f277f13-operator-scripts\") pod \"cinder-db-create-pv7m2\" (UID: \"b189b010-97b8-43c7-8021-05cb6f277f13\") " pod="openstack/cinder-db-create-pv7m2"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.568957 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpfp4\" (UniqueName: \"kubernetes.io/projected/b189b010-97b8-43c7-8021-05cb6f277f13-kube-api-access-hpfp4\") pod \"cinder-db-create-pv7m2\" (UID: \"b189b010-97b8-43c7-8021-05cb6f277f13\") " pod="openstack/cinder-db-create-pv7m2"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.568995 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b276b666-fe5b-4c85-8c89-3854e2bdbfc3-operator-scripts\") pod \"cinder-9205-account-create-update-cdfq6\" (UID: \"b276b666-fe5b-4c85-8c89-3854e2bdbfc3\") " pod="openstack/cinder-9205-account-create-update-cdfq6"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.569099 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p49hl\" (UniqueName: \"kubernetes.io/projected/1c9ac2bf-e1f3-47de-8c33-14098a2217e4-kube-api-access-p49hl\") pod \"barbican-db-create-mtkgx\" (UID: \"1c9ac2bf-e1f3-47de-8c33-14098a2217e4\") " pod="openstack/barbican-db-create-mtkgx"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.570300 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b276b666-fe5b-4c85-8c89-3854e2bdbfc3-operator-scripts\") pod \"cinder-9205-account-create-update-cdfq6\" (UID: \"b276b666-fe5b-4c85-8c89-3854e2bdbfc3\") " pod="openstack/cinder-9205-account-create-update-cdfq6"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.571468 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-eacc-account-create-update-lc94g"]
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.572446 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-eacc-account-create-update-lc94g"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.577573 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.586325 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-eacc-account-create-update-lc94g"]
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.602822 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkn6s\" (UniqueName: \"kubernetes.io/projected/b276b666-fe5b-4c85-8c89-3854e2bdbfc3-kube-api-access-gkn6s\") pod \"cinder-9205-account-create-update-cdfq6\" (UID: \"b276b666-fe5b-4c85-8c89-3854e2bdbfc3\") " pod="openstack/cinder-9205-account-create-update-cdfq6"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.615122 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpfp4\" (UniqueName: \"kubernetes.io/projected/b189b010-97b8-43c7-8021-05cb6f277f13-kube-api-access-hpfp4\") pod \"cinder-db-create-pv7m2\" (UID: \"b189b010-97b8-43c7-8021-05cb6f277f13\") " pod="openstack/cinder-db-create-pv7m2"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.675812 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e-operator-scripts\") pod \"barbican-eacc-account-create-update-lc94g\" (UID: \"a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e\") " pod="openstack/barbican-eacc-account-create-update-lc94g"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.675893 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxx7w\" (UniqueName: \"kubernetes.io/projected/a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e-kube-api-access-bxx7w\") pod \"barbican-eacc-account-create-update-lc94g\" (UID: \"a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e\") " pod="openstack/barbican-eacc-account-create-update-lc94g"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.675930 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p49hl\" (UniqueName: \"kubernetes.io/projected/1c9ac2bf-e1f3-47de-8c33-14098a2217e4-kube-api-access-p49hl\") pod \"barbican-db-create-mtkgx\" (UID: \"1c9ac2bf-e1f3-47de-8c33-14098a2217e4\") " pod="openstack/barbican-db-create-mtkgx"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.675970 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9ac2bf-e1f3-47de-8c33-14098a2217e4-operator-scripts\") pod \"barbican-db-create-mtkgx\" (UID: \"1c9ac2bf-e1f3-47de-8c33-14098a2217e4\") " pod="openstack/barbican-db-create-mtkgx"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.677771 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9ac2bf-e1f3-47de-8c33-14098a2217e4-operator-scripts\") pod \"barbican-db-create-mtkgx\" (UID: \"1c9ac2bf-e1f3-47de-8c33-14098a2217e4\") " pod="openstack/barbican-db-create-mtkgx"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.682287 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7c27-account-create-update-4vvzx"]
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.684665 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c27-account-create-update-4vvzx"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.691056 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.691176 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c27-account-create-update-4vvzx"]
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.694945 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p49hl\" (UniqueName: \"kubernetes.io/projected/1c9ac2bf-e1f3-47de-8c33-14098a2217e4-kube-api-access-p49hl\") pod \"barbican-db-create-mtkgx\" (UID: \"1c9ac2bf-e1f3-47de-8c33-14098a2217e4\") " pod="openstack/barbican-db-create-mtkgx"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.724224 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pv7m2"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.732351 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9205-account-create-update-cdfq6"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.763654 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-bfzvn"]
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.764820 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bfzvn"
Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.775902 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-mtkgx" Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.777095 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhm7d\" (UniqueName: \"kubernetes.io/projected/7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af-kube-api-access-jhm7d\") pod \"neutron-7c27-account-create-update-4vvzx\" (UID: \"7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af\") " pod="openstack/neutron-7c27-account-create-update-4vvzx" Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.777177 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e-operator-scripts\") pod \"barbican-eacc-account-create-update-lc94g\" (UID: \"a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e\") " pod="openstack/barbican-eacc-account-create-update-lc94g" Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.777217 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af-operator-scripts\") pod \"neutron-7c27-account-create-update-4vvzx\" (UID: \"7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af\") " pod="openstack/neutron-7c27-account-create-update-4vvzx" Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.777251 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxx7w\" (UniqueName: \"kubernetes.io/projected/a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e-kube-api-access-bxx7w\") pod \"barbican-eacc-account-create-update-lc94g\" (UID: \"a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e\") " pod="openstack/barbican-eacc-account-create-update-lc94g" Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.778185 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e-operator-scripts\") pod \"barbican-eacc-account-create-update-lc94g\" (UID: \"a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e\") " pod="openstack/barbican-eacc-account-create-update-lc94g" Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.778453 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bfzvn"] Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.803224 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxx7w\" (UniqueName: \"kubernetes.io/projected/a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e-kube-api-access-bxx7w\") pod \"barbican-eacc-account-create-update-lc94g\" (UID: \"a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e\") " pod="openstack/barbican-eacc-account-create-update-lc94g" Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.878315 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bbb2fd8-2668-47de-b3cd-94371c8baa8d-operator-scripts\") pod \"neutron-db-create-bfzvn\" (UID: \"4bbb2fd8-2668-47de-b3cd-94371c8baa8d\") " pod="openstack/neutron-db-create-bfzvn" Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.878393 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhm7d\" (UniqueName: \"kubernetes.io/projected/7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af-kube-api-access-jhm7d\") pod \"neutron-7c27-account-create-update-4vvzx\" (UID: \"7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af\") " pod="openstack/neutron-7c27-account-create-update-4vvzx" Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.878712 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xltk9\" (UniqueName: \"kubernetes.io/projected/4bbb2fd8-2668-47de-b3cd-94371c8baa8d-kube-api-access-xltk9\") pod \"neutron-db-create-bfzvn\" (UID: 
\"4bbb2fd8-2668-47de-b3cd-94371c8baa8d\") " pod="openstack/neutron-db-create-bfzvn" Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.878782 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af-operator-scripts\") pod \"neutron-7c27-account-create-update-4vvzx\" (UID: \"7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af\") " pod="openstack/neutron-7c27-account-create-update-4vvzx" Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.879734 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af-operator-scripts\") pod \"neutron-7c27-account-create-update-4vvzx\" (UID: \"7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af\") " pod="openstack/neutron-7c27-account-create-update-4vvzx" Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.896529 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-eacc-account-create-update-lc94g" Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.899143 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhm7d\" (UniqueName: \"kubernetes.io/projected/7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af-kube-api-access-jhm7d\") pod \"neutron-7c27-account-create-update-4vvzx\" (UID: \"7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af\") " pod="openstack/neutron-7c27-account-create-update-4vvzx" Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.900837 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c27-account-create-update-4vvzx" Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.980690 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bbb2fd8-2668-47de-b3cd-94371c8baa8d-operator-scripts\") pod \"neutron-db-create-bfzvn\" (UID: \"4bbb2fd8-2668-47de-b3cd-94371c8baa8d\") " pod="openstack/neutron-db-create-bfzvn" Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.980803 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xltk9\" (UniqueName: \"kubernetes.io/projected/4bbb2fd8-2668-47de-b3cd-94371c8baa8d-kube-api-access-xltk9\") pod \"neutron-db-create-bfzvn\" (UID: \"4bbb2fd8-2668-47de-b3cd-94371c8baa8d\") " pod="openstack/neutron-db-create-bfzvn" Jan 28 15:38:23 crc kubenswrapper[4871]: I0128 15:38:23.981691 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bbb2fd8-2668-47de-b3cd-94371c8baa8d-operator-scripts\") pod \"neutron-db-create-bfzvn\" (UID: \"4bbb2fd8-2668-47de-b3cd-94371c8baa8d\") " pod="openstack/neutron-db-create-bfzvn" Jan 28 15:38:24 crc kubenswrapper[4871]: I0128 15:38:24.032810 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xltk9\" (UniqueName: \"kubernetes.io/projected/4bbb2fd8-2668-47de-b3cd-94371c8baa8d-kube-api-access-xltk9\") pod \"neutron-db-create-bfzvn\" (UID: \"4bbb2fd8-2668-47de-b3cd-94371c8baa8d\") " pod="openstack/neutron-db-create-bfzvn" Jan 28 15:38:24 crc kubenswrapper[4871]: I0128 15:38:24.039750 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh" event={"ID":"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c","Type":"ContainerStarted","Data":"f361f286eee27485964408f49fd3893cc3e54a1ebc2010a28d4c8f0e133a5a5e"} Jan 28 15:38:24 crc kubenswrapper[4871]: I0128 15:38:24.040063 
4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh" Jan 28 15:38:24 crc kubenswrapper[4871]: I0128 15:38:24.215041 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bfzvn" Jan 28 15:38:24 crc kubenswrapper[4871]: I0128 15:38:24.228021 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh" podStartSLOduration=3.228004683 podStartE2EDuration="3.228004683s" podCreationTimestamp="2026-01-28 15:38:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:38:24.069642588 +0000 UTC m=+1255.965480920" watchObservedRunningTime="2026-01-28 15:38:24.228004683 +0000 UTC m=+1256.123843005" Jan 28 15:38:24 crc kubenswrapper[4871]: I0128 15:38:24.240659 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pv7m2"] Jan 28 15:38:24 crc kubenswrapper[4871]: W0128 15:38:24.243602 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb189b010_97b8_43c7_8021_05cb6f277f13.slice/crio-b1b04fd7bcac00f9b9374ef054d86a865cd6753f2d63a420d5ac20dec3a2aa22 WatchSource:0}: Error finding container b1b04fd7bcac00f9b9374ef054d86a865cd6753f2d63a420d5ac20dec3a2aa22: Status 404 returned error can't find the container with id b1b04fd7bcac00f9b9374ef054d86a865cd6753f2d63a420d5ac20dec3a2aa22 Jan 28 15:38:24 crc kubenswrapper[4871]: I0128 15:38:24.342941 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9205-account-create-update-cdfq6"] Jan 28 15:38:24 crc kubenswrapper[4871]: W0128 15:38:24.357507 4871 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb276b666_fe5b_4c85_8c89_3854e2bdbfc3.slice/crio-a46760f231fc321b9fc4c142932f4f96af2919cd356731fb532704bd2e0ebc4d WatchSource:0}: Error finding container a46760f231fc321b9fc4c142932f4f96af2919cd356731fb532704bd2e0ebc4d: Status 404 returned error can't find the container with id a46760f231fc321b9fc4c142932f4f96af2919cd356731fb532704bd2e0ebc4d Jan 28 15:38:24 crc kubenswrapper[4871]: I0128 15:38:24.406050 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mtkgx"] Jan 28 15:38:24 crc kubenswrapper[4871]: I0128 15:38:24.490082 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c27-account-create-update-4vvzx"] Jan 28 15:38:24 crc kubenswrapper[4871]: I0128 15:38:24.497729 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-eacc-account-create-update-lc94g"] Jan 28 15:38:24 crc kubenswrapper[4871]: W0128 15:38:24.500747 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ec4cd59_230f_4dc8_b2bd_f44d4f6cd1af.slice/crio-1b95d53e2a52bb364d3d6adf780f3598140d10a3859c9654d3682a325eb7bd2b WatchSource:0}: Error finding container 1b95d53e2a52bb364d3d6adf780f3598140d10a3859c9654d3682a325eb7bd2b: Status 404 returned error can't find the container with id 1b95d53e2a52bb364d3d6adf780f3598140d10a3859c9654d3682a325eb7bd2b Jan 28 15:38:24 crc kubenswrapper[4871]: W0128 15:38:24.501128 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9be2fd9_a59f_43ef_bf12_58c6d3bcf07e.slice/crio-0d665cf66ea5e8ebf654ddafadd8fd332815b1da05f601e22717afb01d43f99c WatchSource:0}: Error finding container 0d665cf66ea5e8ebf654ddafadd8fd332815b1da05f601e22717afb01d43f99c: Status 404 returned error can't find the container with id 
0d665cf66ea5e8ebf654ddafadd8fd332815b1da05f601e22717afb01d43f99c Jan 28 15:38:24 crc kubenswrapper[4871]: I0128 15:38:24.729058 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bfzvn"] Jan 28 15:38:24 crc kubenswrapper[4871]: W0128 15:38:24.738417 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bbb2fd8_2668_47de_b3cd_94371c8baa8d.slice/crio-907bde07606ada14669e4d5fcd16e44c4538dc5f2f8a167bc12e3dc623c1480b WatchSource:0}: Error finding container 907bde07606ada14669e4d5fcd16e44c4538dc5f2f8a167bc12e3dc623c1480b: Status 404 returned error can't find the container with id 907bde07606ada14669e4d5fcd16e44c4538dc5f2f8a167bc12e3dc623c1480b Jan 28 15:38:24 crc kubenswrapper[4871]: I0128 15:38:24.914303 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4f1ee34-adf8-4395-97d0-31eef5edde88" path="/var/lib/kubelet/pods/a4f1ee34-adf8-4395-97d0-31eef5edde88/volumes" Jan 28 15:38:25 crc kubenswrapper[4871]: I0128 15:38:25.048128 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c27-account-create-update-4vvzx" event={"ID":"7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af","Type":"ContainerStarted","Data":"25f014d0df7c1bc14135cf91e1ff4ac240e161ecffdd633482bb37e5ac65f583"} Jan 28 15:38:25 crc kubenswrapper[4871]: I0128 15:38:25.048180 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c27-account-create-update-4vvzx" event={"ID":"7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af","Type":"ContainerStarted","Data":"1b95d53e2a52bb364d3d6adf780f3598140d10a3859c9654d3682a325eb7bd2b"} Jan 28 15:38:25 crc kubenswrapper[4871]: I0128 15:38:25.050389 4871 generic.go:334] "Generic (PLEG): container finished" podID="b189b010-97b8-43c7-8021-05cb6f277f13" containerID="73de87bb082df8e7d269f1357755647880224adb3d3dc9afae13df19a33b76dc" exitCode=0 Jan 28 15:38:25 crc kubenswrapper[4871]: I0128 15:38:25.050458 4871 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pv7m2" event={"ID":"b189b010-97b8-43c7-8021-05cb6f277f13","Type":"ContainerDied","Data":"73de87bb082df8e7d269f1357755647880224adb3d3dc9afae13df19a33b76dc"} Jan 28 15:38:25 crc kubenswrapper[4871]: I0128 15:38:25.050483 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pv7m2" event={"ID":"b189b010-97b8-43c7-8021-05cb6f277f13","Type":"ContainerStarted","Data":"b1b04fd7bcac00f9b9374ef054d86a865cd6753f2d63a420d5ac20dec3a2aa22"} Jan 28 15:38:25 crc kubenswrapper[4871]: I0128 15:38:25.052925 4871 generic.go:334] "Generic (PLEG): container finished" podID="b276b666-fe5b-4c85-8c89-3854e2bdbfc3" containerID="cbece9d0f209df3d64601d227690cf1ca9461182b86d1d7ad94ebb55e395612d" exitCode=0 Jan 28 15:38:25 crc kubenswrapper[4871]: I0128 15:38:25.053001 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9205-account-create-update-cdfq6" event={"ID":"b276b666-fe5b-4c85-8c89-3854e2bdbfc3","Type":"ContainerDied","Data":"cbece9d0f209df3d64601d227690cf1ca9461182b86d1d7ad94ebb55e395612d"} Jan 28 15:38:25 crc kubenswrapper[4871]: I0128 15:38:25.053023 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9205-account-create-update-cdfq6" event={"ID":"b276b666-fe5b-4c85-8c89-3854e2bdbfc3","Type":"ContainerStarted","Data":"a46760f231fc321b9fc4c142932f4f96af2919cd356731fb532704bd2e0ebc4d"} Jan 28 15:38:25 crc kubenswrapper[4871]: I0128 15:38:25.055169 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bfzvn" event={"ID":"4bbb2fd8-2668-47de-b3cd-94371c8baa8d","Type":"ContainerStarted","Data":"907bde07606ada14669e4d5fcd16e44c4538dc5f2f8a167bc12e3dc623c1480b"} Jan 28 15:38:25 crc kubenswrapper[4871]: I0128 15:38:25.057825 4871 generic.go:334] "Generic (PLEG): container finished" podID="1c9ac2bf-e1f3-47de-8c33-14098a2217e4" 
containerID="7c2d90c2179017626d8a8e94f80887a005deb3ff2e0f1ae7a74ae38d1ef9494f" exitCode=0 Jan 28 15:38:25 crc kubenswrapper[4871]: I0128 15:38:25.058060 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mtkgx" event={"ID":"1c9ac2bf-e1f3-47de-8c33-14098a2217e4","Type":"ContainerDied","Data":"7c2d90c2179017626d8a8e94f80887a005deb3ff2e0f1ae7a74ae38d1ef9494f"} Jan 28 15:38:25 crc kubenswrapper[4871]: I0128 15:38:25.058151 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mtkgx" event={"ID":"1c9ac2bf-e1f3-47de-8c33-14098a2217e4","Type":"ContainerStarted","Data":"cb3d910e294e963db0042335669302ea60df899d970abb911f97b939d2ea7556"} Jan 28 15:38:25 crc kubenswrapper[4871]: I0128 15:38:25.059986 4871 generic.go:334] "Generic (PLEG): container finished" podID="a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e" containerID="abf32b777bfe9cec1c7a7c854167e55fe4451d16dc7135cec16ad0a19466c771" exitCode=0 Jan 28 15:38:25 crc kubenswrapper[4871]: I0128 15:38:25.060077 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-eacc-account-create-update-lc94g" event={"ID":"a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e","Type":"ContainerDied","Data":"abf32b777bfe9cec1c7a7c854167e55fe4451d16dc7135cec16ad0a19466c771"} Jan 28 15:38:25 crc kubenswrapper[4871]: I0128 15:38:25.060175 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-eacc-account-create-update-lc94g" event={"ID":"a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e","Type":"ContainerStarted","Data":"0d665cf66ea5e8ebf654ddafadd8fd332815b1da05f601e22717afb01d43f99c"} Jan 28 15:38:25 crc kubenswrapper[4871]: I0128 15:38:25.069096 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7c27-account-create-update-4vvzx" podStartSLOduration=2.069077564 podStartE2EDuration="2.069077564s" podCreationTimestamp="2026-01-28 15:38:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:38:25.066129441 +0000 UTC m=+1256.961967763" watchObservedRunningTime="2026-01-28 15:38:25.069077564 +0000 UTC m=+1256.964915886" Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.069559 4871 generic.go:334] "Generic (PLEG): container finished" podID="7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af" containerID="25f014d0df7c1bc14135cf91e1ff4ac240e161ecffdd633482bb37e5ac65f583" exitCode=0 Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.069630 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c27-account-create-update-4vvzx" event={"ID":"7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af","Type":"ContainerDied","Data":"25f014d0df7c1bc14135cf91e1ff4ac240e161ecffdd633482bb37e5ac65f583"} Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.071711 4871 generic.go:334] "Generic (PLEG): container finished" podID="4bbb2fd8-2668-47de-b3cd-94371c8baa8d" containerID="1c80e9c34abfefe75a00a78c48ef3cf07b099eb9d19f3088872c9eae58048805" exitCode=0 Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.071763 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bfzvn" event={"ID":"4bbb2fd8-2668-47de-b3cd-94371c8baa8d","Type":"ContainerDied","Data":"1c80e9c34abfefe75a00a78c48ef3cf07b099eb9d19f3088872c9eae58048805"} Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.598995 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-mtkgx" Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.632945 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9ac2bf-e1f3-47de-8c33-14098a2217e4-operator-scripts\") pod \"1c9ac2bf-e1f3-47de-8c33-14098a2217e4\" (UID: \"1c9ac2bf-e1f3-47de-8c33-14098a2217e4\") " Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.633003 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p49hl\" (UniqueName: \"kubernetes.io/projected/1c9ac2bf-e1f3-47de-8c33-14098a2217e4-kube-api-access-p49hl\") pod \"1c9ac2bf-e1f3-47de-8c33-14098a2217e4\" (UID: \"1c9ac2bf-e1f3-47de-8c33-14098a2217e4\") " Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.634808 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c9ac2bf-e1f3-47de-8c33-14098a2217e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c9ac2bf-e1f3-47de-8c33-14098a2217e4" (UID: "1c9ac2bf-e1f3-47de-8c33-14098a2217e4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.642998 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c9ac2bf-e1f3-47de-8c33-14098a2217e4-kube-api-access-p49hl" (OuterVolumeSpecName: "kube-api-access-p49hl") pod "1c9ac2bf-e1f3-47de-8c33-14098a2217e4" (UID: "1c9ac2bf-e1f3-47de-8c33-14098a2217e4"). InnerVolumeSpecName "kube-api-access-p49hl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.734830 4871 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9ac2bf-e1f3-47de-8c33-14098a2217e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.734869 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p49hl\" (UniqueName: \"kubernetes.io/projected/1c9ac2bf-e1f3-47de-8c33-14098a2217e4-kube-api-access-p49hl\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.761200 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9205-account-create-update-cdfq6" Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.770502 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-eacc-account-create-update-lc94g" Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.780435 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-pv7m2" Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.946915 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b189b010-97b8-43c7-8021-05cb6f277f13-operator-scripts\") pod \"b189b010-97b8-43c7-8021-05cb6f277f13\" (UID: \"b189b010-97b8-43c7-8021-05cb6f277f13\") " Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.946960 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxx7w\" (UniqueName: \"kubernetes.io/projected/a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e-kube-api-access-bxx7w\") pod \"a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e\" (UID: \"a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e\") " Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.947010 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e-operator-scripts\") pod \"a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e\" (UID: \"a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e\") " Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.947081 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkn6s\" (UniqueName: \"kubernetes.io/projected/b276b666-fe5b-4c85-8c89-3854e2bdbfc3-kube-api-access-gkn6s\") pod \"b276b666-fe5b-4c85-8c89-3854e2bdbfc3\" (UID: \"b276b666-fe5b-4c85-8c89-3854e2bdbfc3\") " Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.947455 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b189b010-97b8-43c7-8021-05cb6f277f13-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b189b010-97b8-43c7-8021-05cb6f277f13" (UID: "b189b010-97b8-43c7-8021-05cb6f277f13"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.947731 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e" (UID: "a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.947858 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpfp4\" (UniqueName: \"kubernetes.io/projected/b189b010-97b8-43c7-8021-05cb6f277f13-kube-api-access-hpfp4\") pod \"b189b010-97b8-43c7-8021-05cb6f277f13\" (UID: \"b189b010-97b8-43c7-8021-05cb6f277f13\") " Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.947959 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b276b666-fe5b-4c85-8c89-3854e2bdbfc3-operator-scripts\") pod \"b276b666-fe5b-4c85-8c89-3854e2bdbfc3\" (UID: \"b276b666-fe5b-4c85-8c89-3854e2bdbfc3\") " Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.948699 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b276b666-fe5b-4c85-8c89-3854e2bdbfc3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b276b666-fe5b-4c85-8c89-3854e2bdbfc3" (UID: "b276b666-fe5b-4c85-8c89-3854e2bdbfc3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.949119 4871 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b276b666-fe5b-4c85-8c89-3854e2bdbfc3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.949137 4871 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b189b010-97b8-43c7-8021-05cb6f277f13-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.949150 4871 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.951280 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e-kube-api-access-bxx7w" (OuterVolumeSpecName: "kube-api-access-bxx7w") pod "a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e" (UID: "a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e"). InnerVolumeSpecName "kube-api-access-bxx7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.951423 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b276b666-fe5b-4c85-8c89-3854e2bdbfc3-kube-api-access-gkn6s" (OuterVolumeSpecName: "kube-api-access-gkn6s") pod "b276b666-fe5b-4c85-8c89-3854e2bdbfc3" (UID: "b276b666-fe5b-4c85-8c89-3854e2bdbfc3"). InnerVolumeSpecName "kube-api-access-gkn6s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:26 crc kubenswrapper[4871]: I0128 15:38:26.951826 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b189b010-97b8-43c7-8021-05cb6f277f13-kube-api-access-hpfp4" (OuterVolumeSpecName: "kube-api-access-hpfp4") pod "b189b010-97b8-43c7-8021-05cb6f277f13" (UID: "b189b010-97b8-43c7-8021-05cb6f277f13"). InnerVolumeSpecName "kube-api-access-hpfp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.050443 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkn6s\" (UniqueName: \"kubernetes.io/projected/b276b666-fe5b-4c85-8c89-3854e2bdbfc3-kube-api-access-gkn6s\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.051004 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpfp4\" (UniqueName: \"kubernetes.io/projected/b189b010-97b8-43c7-8021-05cb6f277f13-kube-api-access-hpfp4\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.051090 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxx7w\" (UniqueName: \"kubernetes.io/projected/a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e-kube-api-access-bxx7w\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.084942 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pv7m2" event={"ID":"b189b010-97b8-43c7-8021-05cb6f277f13","Type":"ContainerDied","Data":"b1b04fd7bcac00f9b9374ef054d86a865cd6753f2d63a420d5ac20dec3a2aa22"} Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.084983 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1b04fd7bcac00f9b9374ef054d86a865cd6753f2d63a420d5ac20dec3a2aa22" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.085031 4871 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/cinder-db-create-pv7m2" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.088192 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9205-account-create-update-cdfq6" event={"ID":"b276b666-fe5b-4c85-8c89-3854e2bdbfc3","Type":"ContainerDied","Data":"a46760f231fc321b9fc4c142932f4f96af2919cd356731fb532704bd2e0ebc4d"} Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.088217 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9205-account-create-update-cdfq6" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.088225 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a46760f231fc321b9fc4c142932f4f96af2919cd356731fb532704bd2e0ebc4d" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.091922 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mtkgx" event={"ID":"1c9ac2bf-e1f3-47de-8c33-14098a2217e4","Type":"ContainerDied","Data":"cb3d910e294e963db0042335669302ea60df899d970abb911f97b939d2ea7556"} Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.091977 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb3d910e294e963db0042335669302ea60df899d970abb911f97b939d2ea7556" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.092275 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-mtkgx" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.094572 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-eacc-account-create-update-lc94g" event={"ID":"a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e","Type":"ContainerDied","Data":"0d665cf66ea5e8ebf654ddafadd8fd332815b1da05f601e22717afb01d43f99c"} Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.094655 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d665cf66ea5e8ebf654ddafadd8fd332815b1da05f601e22717afb01d43f99c" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.094666 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-eacc-account-create-update-lc94g" Jan 28 15:38:27 crc kubenswrapper[4871]: E0128 15:38:27.148998 4871 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9be2fd9_a59f_43ef_bf12_58c6d3bcf07e.slice\": RecentStats: unable to find data in memory cache]" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.380076 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c27-account-create-update-4vvzx" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.408364 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-bfzvn" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.458608 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af-operator-scripts\") pod \"7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af\" (UID: \"7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af\") " Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.458711 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhm7d\" (UniqueName: \"kubernetes.io/projected/7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af-kube-api-access-jhm7d\") pod \"7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af\" (UID: \"7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af\") " Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.458784 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bbb2fd8-2668-47de-b3cd-94371c8baa8d-operator-scripts\") pod \"4bbb2fd8-2668-47de-b3cd-94371c8baa8d\" (UID: \"4bbb2fd8-2668-47de-b3cd-94371c8baa8d\") " Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.458918 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xltk9\" (UniqueName: \"kubernetes.io/projected/4bbb2fd8-2668-47de-b3cd-94371c8baa8d-kube-api-access-xltk9\") pod \"4bbb2fd8-2668-47de-b3cd-94371c8baa8d\" (UID: \"4bbb2fd8-2668-47de-b3cd-94371c8baa8d\") " Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.459311 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af" (UID: "7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.459439 4871 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.460011 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bbb2fd8-2668-47de-b3cd-94371c8baa8d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4bbb2fd8-2668-47de-b3cd-94371c8baa8d" (UID: "4bbb2fd8-2668-47de-b3cd-94371c8baa8d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.464471 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bbb2fd8-2668-47de-b3cd-94371c8baa8d-kube-api-access-xltk9" (OuterVolumeSpecName: "kube-api-access-xltk9") pod "4bbb2fd8-2668-47de-b3cd-94371c8baa8d" (UID: "4bbb2fd8-2668-47de-b3cd-94371c8baa8d"). InnerVolumeSpecName "kube-api-access-xltk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.466376 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af-kube-api-access-jhm7d" (OuterVolumeSpecName: "kube-api-access-jhm7d") pod "7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af" (UID: "7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af"). InnerVolumeSpecName "kube-api-access-jhm7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.560792 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhm7d\" (UniqueName: \"kubernetes.io/projected/7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af-kube-api-access-jhm7d\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.560835 4871 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bbb2fd8-2668-47de-b3cd-94371c8baa8d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:27 crc kubenswrapper[4871]: I0128 15:38:27.560845 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xltk9\" (UniqueName: \"kubernetes.io/projected/4bbb2fd8-2668-47de-b3cd-94371c8baa8d-kube-api-access-xltk9\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:28 crc kubenswrapper[4871]: I0128 15:38:28.106897 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bfzvn" event={"ID":"4bbb2fd8-2668-47de-b3cd-94371c8baa8d","Type":"ContainerDied","Data":"907bde07606ada14669e4d5fcd16e44c4538dc5f2f8a167bc12e3dc623c1480b"} Jan 28 15:38:28 crc kubenswrapper[4871]: I0128 15:38:28.106952 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="907bde07606ada14669e4d5fcd16e44c4538dc5f2f8a167bc12e3dc623c1480b" Jan 28 15:38:28 crc kubenswrapper[4871]: I0128 15:38:28.106916 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-bfzvn" Jan 28 15:38:28 crc kubenswrapper[4871]: I0128 15:38:28.109095 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c27-account-create-update-4vvzx" event={"ID":"7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af","Type":"ContainerDied","Data":"1b95d53e2a52bb364d3d6adf780f3598140d10a3859c9654d3682a325eb7bd2b"} Jan 28 15:38:28 crc kubenswrapper[4871]: I0128 15:38:28.109132 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b95d53e2a52bb364d3d6adf780f3598140d10a3859c9654d3682a325eb7bd2b" Jan 28 15:38:28 crc kubenswrapper[4871]: I0128 15:38:28.109143 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c27-account-create-update-4vvzx" Jan 28 15:38:31 crc kubenswrapper[4871]: I0128 15:38:31.647801 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh" Jan 28 15:38:31 crc kubenswrapper[4871]: I0128 15:38:31.719394 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-ncpsd"] Jan 28 15:38:31 crc kubenswrapper[4871]: I0128 15:38:31.719685 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-ncpsd" podUID="25df56c1-4cb1-4018-9477-8f7b2b4f1410" containerName="dnsmasq-dns" containerID="cri-o://a907faa39ffd998edd2434303c38467a92356e4b6ccf376f4017bb2cd6a22426" gracePeriod=10 Jan 28 15:38:32 crc kubenswrapper[4871]: I0128 15:38:32.140903 4871 generic.go:334] "Generic (PLEG): container finished" podID="25df56c1-4cb1-4018-9477-8f7b2b4f1410" containerID="a907faa39ffd998edd2434303c38467a92356e4b6ccf376f4017bb2cd6a22426" exitCode=0 Jan 28 15:38:32 crc kubenswrapper[4871]: I0128 15:38:32.140945 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-ncpsd" 
event={"ID":"25df56c1-4cb1-4018-9477-8f7b2b4f1410","Type":"ContainerDied","Data":"a907faa39ffd998edd2434303c38467a92356e4b6ccf376f4017bb2cd6a22426"} Jan 28 15:38:32 crc kubenswrapper[4871]: I0128 15:38:32.140970 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-ncpsd" event={"ID":"25df56c1-4cb1-4018-9477-8f7b2b4f1410","Type":"ContainerDied","Data":"beddbd2d387230c0f15938fac3be563632a51d32c305d6de1c5983bc46785c8a"} Jan 28 15:38:32 crc kubenswrapper[4871]: I0128 15:38:32.140982 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beddbd2d387230c0f15938fac3be563632a51d32c305d6de1c5983bc46785c8a" Jan 28 15:38:32 crc kubenswrapper[4871]: I0128 15:38:32.199773 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:38:32 crc kubenswrapper[4871]: I0128 15:38:32.333386 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-config\") pod \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " Jan 28 15:38:32 crc kubenswrapper[4871]: I0128 15:38:32.333475 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvhx5\" (UniqueName: \"kubernetes.io/projected/25df56c1-4cb1-4018-9477-8f7b2b4f1410-kube-api-access-mvhx5\") pod \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " Jan 28 15:38:32 crc kubenswrapper[4871]: I0128 15:38:32.333503 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-dns-svc\") pod \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " Jan 28 15:38:32 crc kubenswrapper[4871]: I0128 15:38:32.333729 4871 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-ovsdbserver-sb\") pod \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " Jan 28 15:38:32 crc kubenswrapper[4871]: I0128 15:38:32.333747 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-ovsdbserver-nb\") pod \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\" (UID: \"25df56c1-4cb1-4018-9477-8f7b2b4f1410\") " Jan 28 15:38:32 crc kubenswrapper[4871]: I0128 15:38:32.339757 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25df56c1-4cb1-4018-9477-8f7b2b4f1410-kube-api-access-mvhx5" (OuterVolumeSpecName: "kube-api-access-mvhx5") pod "25df56c1-4cb1-4018-9477-8f7b2b4f1410" (UID: "25df56c1-4cb1-4018-9477-8f7b2b4f1410"). InnerVolumeSpecName "kube-api-access-mvhx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:32 crc kubenswrapper[4871]: I0128 15:38:32.382860 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "25df56c1-4cb1-4018-9477-8f7b2b4f1410" (UID: "25df56c1-4cb1-4018-9477-8f7b2b4f1410"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:32 crc kubenswrapper[4871]: I0128 15:38:32.387604 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-config" (OuterVolumeSpecName: "config") pod "25df56c1-4cb1-4018-9477-8f7b2b4f1410" (UID: "25df56c1-4cb1-4018-9477-8f7b2b4f1410"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:32 crc kubenswrapper[4871]: I0128 15:38:32.393202 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "25df56c1-4cb1-4018-9477-8f7b2b4f1410" (UID: "25df56c1-4cb1-4018-9477-8f7b2b4f1410"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:32 crc kubenswrapper[4871]: I0128 15:38:32.396916 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "25df56c1-4cb1-4018-9477-8f7b2b4f1410" (UID: "25df56c1-4cb1-4018-9477-8f7b2b4f1410"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:32 crc kubenswrapper[4871]: I0128 15:38:32.435789 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:32 crc kubenswrapper[4871]: I0128 15:38:32.435821 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvhx5\" (UniqueName: \"kubernetes.io/projected/25df56c1-4cb1-4018-9477-8f7b2b4f1410-kube-api-access-mvhx5\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:32 crc kubenswrapper[4871]: I0128 15:38:32.435837 4871 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:32 crc kubenswrapper[4871]: I0128 15:38:32.435851 4871 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:32 crc 
kubenswrapper[4871]: I0128 15:38:32.435861 4871 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25df56c1-4cb1-4018-9477-8f7b2b4f1410-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:33 crc kubenswrapper[4871]: I0128 15:38:33.148472 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-ncpsd" Jan 28 15:38:33 crc kubenswrapper[4871]: I0128 15:38:33.177444 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-ncpsd"] Jan 28 15:38:33 crc kubenswrapper[4871]: I0128 15:38:33.185608 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-ncpsd"] Jan 28 15:38:34 crc kubenswrapper[4871]: I0128 15:38:34.157151 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-s99pp" event={"ID":"02722604-9d90-40f1-9518-ee221fecdca0","Type":"ContainerStarted","Data":"10af46123495183e63bea233d340e710427d656264173106b6b892675db5d904"} Jan 28 15:38:34 crc kubenswrapper[4871]: I0128 15:38:34.177778 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-s99pp" podStartSLOduration=2.699668281 podStartE2EDuration="38.177756226s" podCreationTimestamp="2026-01-28 15:37:56 +0000 UTC" firstStartedPulling="2026-01-28 15:37:57.822373641 +0000 UTC m=+1229.718211973" lastFinishedPulling="2026-01-28 15:38:33.300461586 +0000 UTC m=+1265.196299918" observedRunningTime="2026-01-28 15:38:34.17311148 +0000 UTC m=+1266.068949802" watchObservedRunningTime="2026-01-28 15:38:34.177756226 +0000 UTC m=+1266.073594548" Jan 28 15:38:34 crc kubenswrapper[4871]: I0128 15:38:34.912751 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25df56c1-4cb1-4018-9477-8f7b2b4f1410" path="/var/lib/kubelet/pods/25df56c1-4cb1-4018-9477-8f7b2b4f1410/volumes" Jan 28 15:38:41 crc kubenswrapper[4871]: I0128 
15:38:41.214688 4871 generic.go:334] "Generic (PLEG): container finished" podID="02722604-9d90-40f1-9518-ee221fecdca0" containerID="10af46123495183e63bea233d340e710427d656264173106b6b892675db5d904" exitCode=0 Jan 28 15:38:41 crc kubenswrapper[4871]: I0128 15:38:41.214782 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-s99pp" event={"ID":"02722604-9d90-40f1-9518-ee221fecdca0","Type":"ContainerDied","Data":"10af46123495183e63bea233d340e710427d656264173106b6b892675db5d904"} Jan 28 15:38:42 crc kubenswrapper[4871]: I0128 15:38:42.644473 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-s99pp" Jan 28 15:38:42 crc kubenswrapper[4871]: I0128 15:38:42.815255 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02722604-9d90-40f1-9518-ee221fecdca0-config-data\") pod \"02722604-9d90-40f1-9518-ee221fecdca0\" (UID: \"02722604-9d90-40f1-9518-ee221fecdca0\") " Jan 28 15:38:42 crc kubenswrapper[4871]: I0128 15:38:42.815303 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bthhn\" (UniqueName: \"kubernetes.io/projected/02722604-9d90-40f1-9518-ee221fecdca0-kube-api-access-bthhn\") pod \"02722604-9d90-40f1-9518-ee221fecdca0\" (UID: \"02722604-9d90-40f1-9518-ee221fecdca0\") " Jan 28 15:38:42 crc kubenswrapper[4871]: I0128 15:38:42.815388 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02722604-9d90-40f1-9518-ee221fecdca0-combined-ca-bundle\") pod \"02722604-9d90-40f1-9518-ee221fecdca0\" (UID: \"02722604-9d90-40f1-9518-ee221fecdca0\") " Jan 28 15:38:42 crc kubenswrapper[4871]: I0128 15:38:42.815481 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/02722604-9d90-40f1-9518-ee221fecdca0-db-sync-config-data\") pod \"02722604-9d90-40f1-9518-ee221fecdca0\" (UID: \"02722604-9d90-40f1-9518-ee221fecdca0\") " Jan 28 15:38:42 crc kubenswrapper[4871]: I0128 15:38:42.821289 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02722604-9d90-40f1-9518-ee221fecdca0-kube-api-access-bthhn" (OuterVolumeSpecName: "kube-api-access-bthhn") pod "02722604-9d90-40f1-9518-ee221fecdca0" (UID: "02722604-9d90-40f1-9518-ee221fecdca0"). InnerVolumeSpecName "kube-api-access-bthhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:42 crc kubenswrapper[4871]: I0128 15:38:42.822472 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02722604-9d90-40f1-9518-ee221fecdca0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "02722604-9d90-40f1-9518-ee221fecdca0" (UID: "02722604-9d90-40f1-9518-ee221fecdca0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:38:42 crc kubenswrapper[4871]: I0128 15:38:42.840873 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02722604-9d90-40f1-9518-ee221fecdca0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02722604-9d90-40f1-9518-ee221fecdca0" (UID: "02722604-9d90-40f1-9518-ee221fecdca0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:38:42 crc kubenswrapper[4871]: I0128 15:38:42.859443 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02722604-9d90-40f1-9518-ee221fecdca0-config-data" (OuterVolumeSpecName: "config-data") pod "02722604-9d90-40f1-9518-ee221fecdca0" (UID: "02722604-9d90-40f1-9518-ee221fecdca0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:38:42 crc kubenswrapper[4871]: I0128 15:38:42.917322 4871 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/02722604-9d90-40f1-9518-ee221fecdca0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:42 crc kubenswrapper[4871]: I0128 15:38:42.917370 4871 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02722604-9d90-40f1-9518-ee221fecdca0-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:42 crc kubenswrapper[4871]: I0128 15:38:42.917386 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bthhn\" (UniqueName: \"kubernetes.io/projected/02722604-9d90-40f1-9518-ee221fecdca0-kube-api-access-bthhn\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:42 crc kubenswrapper[4871]: I0128 15:38:42.917397 4871 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02722604-9d90-40f1-9518-ee221fecdca0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.231451 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-s99pp" event={"ID":"02722604-9d90-40f1-9518-ee221fecdca0","Type":"ContainerDied","Data":"b577da67bfa7459a8bbdfac4b0d63bc0ca17e062189263788ef43d7d584abcb4"} Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.231494 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b577da67bfa7459a8bbdfac4b0d63bc0ca17e062189263788ef43d7d584abcb4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.231517 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-s99pp" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.622915 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-qq2x4"] Jan 28 15:38:43 crc kubenswrapper[4871]: E0128 15:38:43.623315 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bbb2fd8-2668-47de-b3cd-94371c8baa8d" containerName="mariadb-database-create" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.623332 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bbb2fd8-2668-47de-b3cd-94371c8baa8d" containerName="mariadb-database-create" Jan 28 15:38:43 crc kubenswrapper[4871]: E0128 15:38:43.623345 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b276b666-fe5b-4c85-8c89-3854e2bdbfc3" containerName="mariadb-account-create-update" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.623351 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="b276b666-fe5b-4c85-8c89-3854e2bdbfc3" containerName="mariadb-account-create-update" Jan 28 15:38:43 crc kubenswrapper[4871]: E0128 15:38:43.623364 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c9ac2bf-e1f3-47de-8c33-14098a2217e4" containerName="mariadb-database-create" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.623371 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c9ac2bf-e1f3-47de-8c33-14098a2217e4" containerName="mariadb-database-create" Jan 28 15:38:43 crc kubenswrapper[4871]: E0128 15:38:43.623380 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02722604-9d90-40f1-9518-ee221fecdca0" containerName="glance-db-sync" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.623386 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="02722604-9d90-40f1-9518-ee221fecdca0" containerName="glance-db-sync" Jan 28 15:38:43 crc kubenswrapper[4871]: E0128 15:38:43.623396 4871 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b189b010-97b8-43c7-8021-05cb6f277f13" containerName="mariadb-database-create" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.623403 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="b189b010-97b8-43c7-8021-05cb6f277f13" containerName="mariadb-database-create" Jan 28 15:38:43 crc kubenswrapper[4871]: E0128 15:38:43.623417 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25df56c1-4cb1-4018-9477-8f7b2b4f1410" containerName="init" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.623425 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="25df56c1-4cb1-4018-9477-8f7b2b4f1410" containerName="init" Jan 28 15:38:43 crc kubenswrapper[4871]: E0128 15:38:43.623440 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af" containerName="mariadb-account-create-update" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.623450 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af" containerName="mariadb-account-create-update" Jan 28 15:38:43 crc kubenswrapper[4871]: E0128 15:38:43.623465 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e" containerName="mariadb-account-create-update" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.623472 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e" containerName="mariadb-account-create-update" Jan 28 15:38:43 crc kubenswrapper[4871]: E0128 15:38:43.623487 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25df56c1-4cb1-4018-9477-8f7b2b4f1410" containerName="dnsmasq-dns" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.623493 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="25df56c1-4cb1-4018-9477-8f7b2b4f1410" containerName="dnsmasq-dns" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.623675 4871 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="25df56c1-4cb1-4018-9477-8f7b2b4f1410" containerName="dnsmasq-dns" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.623687 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c9ac2bf-e1f3-47de-8c33-14098a2217e4" containerName="mariadb-database-create" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.623695 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bbb2fd8-2668-47de-b3cd-94371c8baa8d" containerName="mariadb-database-create" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.623704 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e" containerName="mariadb-account-create-update" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.623715 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="b189b010-97b8-43c7-8021-05cb6f277f13" containerName="mariadb-database-create" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.623725 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="02722604-9d90-40f1-9518-ee221fecdca0" containerName="glance-db-sync" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.623736 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="b276b666-fe5b-4c85-8c89-3854e2bdbfc3" containerName="mariadb-account-create-update" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.623746 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af" containerName="mariadb-account-create-update" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.624567 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.637494 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-qq2x4"] Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.728649 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3142133-060d-4a0d-9a66-2a58279692e3-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-qq2x4\" (UID: \"e3142133-060d-4a0d-9a66-2a58279692e3\") " pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.728738 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3142133-060d-4a0d-9a66-2a58279692e3-config\") pod \"dnsmasq-dns-7ff5475cc9-qq2x4\" (UID: \"e3142133-060d-4a0d-9a66-2a58279692e3\") " pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.728815 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3142133-060d-4a0d-9a66-2a58279692e3-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-qq2x4\" (UID: \"e3142133-060d-4a0d-9a66-2a58279692e3\") " pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.728843 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3142133-060d-4a0d-9a66-2a58279692e3-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-qq2x4\" (UID: \"e3142133-060d-4a0d-9a66-2a58279692e3\") " pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.728916 4871 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3142133-060d-4a0d-9a66-2a58279692e3-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-qq2x4\" (UID: \"e3142133-060d-4a0d-9a66-2a58279692e3\") " pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.728942 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfmhr\" (UniqueName: \"kubernetes.io/projected/e3142133-060d-4a0d-9a66-2a58279692e3-kube-api-access-vfmhr\") pod \"dnsmasq-dns-7ff5475cc9-qq2x4\" (UID: \"e3142133-060d-4a0d-9a66-2a58279692e3\") " pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.813625 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.813693 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.813742 4871 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.814497 4871 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68a4d17dddc2aafde953ccd496bb4751f12ecdd49f81aa1c0d30f071f672c508"} 
pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.814561 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" containerID="cri-o://68a4d17dddc2aafde953ccd496bb4751f12ecdd49f81aa1c0d30f071f672c508" gracePeriod=600 Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.830521 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3142133-060d-4a0d-9a66-2a58279692e3-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-qq2x4\" (UID: \"e3142133-060d-4a0d-9a66-2a58279692e3\") " pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.830564 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3142133-060d-4a0d-9a66-2a58279692e3-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-qq2x4\" (UID: \"e3142133-060d-4a0d-9a66-2a58279692e3\") " pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.830651 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3142133-060d-4a0d-9a66-2a58279692e3-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-qq2x4\" (UID: \"e3142133-060d-4a0d-9a66-2a58279692e3\") " pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.830686 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfmhr\" (UniqueName: \"kubernetes.io/projected/e3142133-060d-4a0d-9a66-2a58279692e3-kube-api-access-vfmhr\") pod 
\"dnsmasq-dns-7ff5475cc9-qq2x4\" (UID: \"e3142133-060d-4a0d-9a66-2a58279692e3\") " pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.830725 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3142133-060d-4a0d-9a66-2a58279692e3-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-qq2x4\" (UID: \"e3142133-060d-4a0d-9a66-2a58279692e3\") " pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.830779 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3142133-060d-4a0d-9a66-2a58279692e3-config\") pod \"dnsmasq-dns-7ff5475cc9-qq2x4\" (UID: \"e3142133-060d-4a0d-9a66-2a58279692e3\") " pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.831993 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3142133-060d-4a0d-9a66-2a58279692e3-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-qq2x4\" (UID: \"e3142133-060d-4a0d-9a66-2a58279692e3\") " pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.832129 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3142133-060d-4a0d-9a66-2a58279692e3-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-qq2x4\" (UID: \"e3142133-060d-4a0d-9a66-2a58279692e3\") " pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.832250 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3142133-060d-4a0d-9a66-2a58279692e3-config\") pod \"dnsmasq-dns-7ff5475cc9-qq2x4\" (UID: \"e3142133-060d-4a0d-9a66-2a58279692e3\") " 
pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.832634 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3142133-060d-4a0d-9a66-2a58279692e3-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-qq2x4\" (UID: \"e3142133-060d-4a0d-9a66-2a58279692e3\") " pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.834948 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3142133-060d-4a0d-9a66-2a58279692e3-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-qq2x4\" (UID: \"e3142133-060d-4a0d-9a66-2a58279692e3\") " pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.858287 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfmhr\" (UniqueName: \"kubernetes.io/projected/e3142133-060d-4a0d-9a66-2a58279692e3-kube-api-access-vfmhr\") pod \"dnsmasq-dns-7ff5475cc9-qq2x4\" (UID: \"e3142133-060d-4a0d-9a66-2a58279692e3\") " pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:43 crc kubenswrapper[4871]: I0128 15:38:43.948058 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:44 crc kubenswrapper[4871]: I0128 15:38:44.241233 4871 generic.go:334] "Generic (PLEG): container finished" podID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerID="68a4d17dddc2aafde953ccd496bb4751f12ecdd49f81aa1c0d30f071f672c508" exitCode=0 Jan 28 15:38:44 crc kubenswrapper[4871]: I0128 15:38:44.241314 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerDied","Data":"68a4d17dddc2aafde953ccd496bb4751f12ecdd49f81aa1c0d30f071f672c508"} Jan 28 15:38:44 crc kubenswrapper[4871]: I0128 15:38:44.241566 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerStarted","Data":"65517cc51f93a0a3ef6adac2d141e0b3335a9afba2b2148de081b59fef76ee0c"} Jan 28 15:38:44 crc kubenswrapper[4871]: I0128 15:38:44.241600 4871 scope.go:117] "RemoveContainer" containerID="504ae14f7e055da72d55c5a96bfc70153a17e45dce0bb3e15ce3dccb6926e332" Jan 28 15:38:44 crc kubenswrapper[4871]: I0128 15:38:44.504920 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-qq2x4"] Jan 28 15:38:44 crc kubenswrapper[4871]: W0128 15:38:44.506954 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3142133_060d_4a0d_9a66_2a58279692e3.slice/crio-e8feb72321603611a03f8ede577362378f15b45dcc08a2bbcdc578990542e099 WatchSource:0}: Error finding container e8feb72321603611a03f8ede577362378f15b45dcc08a2bbcdc578990542e099: Status 404 returned error can't find the container with id e8feb72321603611a03f8ede577362378f15b45dcc08a2bbcdc578990542e099 Jan 28 15:38:45 crc kubenswrapper[4871]: I0128 15:38:45.250141 4871 generic.go:334] "Generic (PLEG): container 
finished" podID="e3142133-060d-4a0d-9a66-2a58279692e3" containerID="84ed9eea8c9bbe7e53f1fe69415b54e52caf93ecec9410975d5fe320b203a735" exitCode=0 Jan 28 15:38:45 crc kubenswrapper[4871]: I0128 15:38:45.250318 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" event={"ID":"e3142133-060d-4a0d-9a66-2a58279692e3","Type":"ContainerDied","Data":"84ed9eea8c9bbe7e53f1fe69415b54e52caf93ecec9410975d5fe320b203a735"} Jan 28 15:38:45 crc kubenswrapper[4871]: I0128 15:38:45.250914 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" event={"ID":"e3142133-060d-4a0d-9a66-2a58279692e3","Type":"ContainerStarted","Data":"e8feb72321603611a03f8ede577362378f15b45dcc08a2bbcdc578990542e099"} Jan 28 15:38:46 crc kubenswrapper[4871]: I0128 15:38:46.262906 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" event={"ID":"e3142133-060d-4a0d-9a66-2a58279692e3","Type":"ContainerStarted","Data":"44dd2d6da2d98e49472c969c889a334eeff0f7853c9e98f50b8fce2602f1bf5c"} Jan 28 15:38:46 crc kubenswrapper[4871]: I0128 15:38:46.263499 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:46 crc kubenswrapper[4871]: I0128 15:38:46.288976 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" podStartSLOduration=3.288950999 podStartE2EDuration="3.288950999s" podCreationTimestamp="2026-01-28 15:38:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:38:46.280189013 +0000 UTC m=+1278.176027355" watchObservedRunningTime="2026-01-28 15:38:46.288950999 +0000 UTC m=+1278.184789331" Jan 28 15:38:53 crc kubenswrapper[4871]: I0128 15:38:53.950967 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-7ff5475cc9-qq2x4" Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.019750 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-7r2wh"] Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.020423 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh" podUID="7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c" containerName="dnsmasq-dns" containerID="cri-o://f361f286eee27485964408f49fd3893cc3e54a1ebc2010a28d4c8f0e133a5a5e" gracePeriod=10 Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.325827 4871 generic.go:334] "Generic (PLEG): container finished" podID="7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c" containerID="f361f286eee27485964408f49fd3893cc3e54a1ebc2010a28d4c8f0e133a5a5e" exitCode=0 Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.325869 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh" event={"ID":"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c","Type":"ContainerDied","Data":"f361f286eee27485964408f49fd3893cc3e54a1ebc2010a28d4c8f0e133a5a5e"} Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.519559 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh" Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.545795 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-config\") pod \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.545880 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-dns-swift-storage-0\") pod \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.545988 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-ovsdbserver-nb\") pod \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.546035 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhbt6\" (UniqueName: \"kubernetes.io/projected/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-kube-api-access-bhbt6\") pod \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.546079 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-ovsdbserver-sb\") pod \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.546118 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-dns-svc\") pod \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\" (UID: \"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c\") " Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.552142 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-kube-api-access-bhbt6" (OuterVolumeSpecName: "kube-api-access-bhbt6") pod "7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c" (UID: "7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c"). InnerVolumeSpecName "kube-api-access-bhbt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.597285 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c" (UID: "7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.597461 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c" (UID: "7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.612385 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c" (UID: "7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.626315 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-config" (OuterVolumeSpecName: "config") pod "7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c" (UID: "7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.638510 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c" (UID: "7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.648025 4871 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.648059 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhbt6\" (UniqueName: \"kubernetes.io/projected/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-kube-api-access-bhbt6\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.648072 4871 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.648083 4871 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:54 crc 
kubenswrapper[4871]: I0128 15:38:54.648094 4871 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:54 crc kubenswrapper[4871]: I0128 15:38:54.648104 4871 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:55 crc kubenswrapper[4871]: I0128 15:38:55.335580 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh" event={"ID":"7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c","Type":"ContainerDied","Data":"b87b8593b08649beb698a3727d99dae1370b1930a1435551ca9fca56a59596e3"} Jan 28 15:38:55 crc kubenswrapper[4871]: I0128 15:38:55.335667 4871 scope.go:117] "RemoveContainer" containerID="f361f286eee27485964408f49fd3893cc3e54a1ebc2010a28d4c8f0e133a5a5e" Jan 28 15:38:55 crc kubenswrapper[4871]: I0128 15:38:55.335667 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-7r2wh" Jan 28 15:38:55 crc kubenswrapper[4871]: I0128 15:38:55.360293 4871 scope.go:117] "RemoveContainer" containerID="f625a85135c97429a429d9dd99406c08f512631b7e6a0cf8de4a9a58a0ffe6a6" Jan 28 15:38:55 crc kubenswrapper[4871]: I0128 15:38:55.363887 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-7r2wh"] Jan 28 15:38:55 crc kubenswrapper[4871]: I0128 15:38:55.373755 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-7r2wh"] Jan 28 15:38:56 crc kubenswrapper[4871]: I0128 15:38:56.916141 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c" path="/var/lib/kubelet/pods/7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c/volumes" Jan 28 15:40:47 crc kubenswrapper[4871]: I0128 15:40:47.949303 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fvtg5"] Jan 28 15:40:47 crc kubenswrapper[4871]: E0128 15:40:47.950303 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c" containerName="init" Jan 28 15:40:47 crc kubenswrapper[4871]: I0128 15:40:47.950318 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c" containerName="init" Jan 28 15:40:47 crc kubenswrapper[4871]: E0128 15:40:47.950336 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c" containerName="dnsmasq-dns" Jan 28 15:40:47 crc kubenswrapper[4871]: I0128 15:40:47.950344 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c" containerName="dnsmasq-dns" Jan 28 15:40:47 crc kubenswrapper[4871]: I0128 15:40:47.950545 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fa8b4cf-5b46-4ca4-933a-28dfa9c6286c" containerName="dnsmasq-dns" Jan 28 15:40:47 crc kubenswrapper[4871]: 
I0128 15:40:47.951942 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fvtg5" Jan 28 15:40:47 crc kubenswrapper[4871]: I0128 15:40:47.960512 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fvtg5"] Jan 28 15:40:48 crc kubenswrapper[4871]: I0128 15:40:48.116662 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql2fx\" (UniqueName: \"kubernetes.io/projected/617da384-68c4-46a9-84ae-d369011a135d-kube-api-access-ql2fx\") pod \"redhat-operators-fvtg5\" (UID: \"617da384-68c4-46a9-84ae-d369011a135d\") " pod="openshift-marketplace/redhat-operators-fvtg5" Jan 28 15:40:48 crc kubenswrapper[4871]: I0128 15:40:48.116828 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/617da384-68c4-46a9-84ae-d369011a135d-catalog-content\") pod \"redhat-operators-fvtg5\" (UID: \"617da384-68c4-46a9-84ae-d369011a135d\") " pod="openshift-marketplace/redhat-operators-fvtg5" Jan 28 15:40:48 crc kubenswrapper[4871]: I0128 15:40:48.116861 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/617da384-68c4-46a9-84ae-d369011a135d-utilities\") pod \"redhat-operators-fvtg5\" (UID: \"617da384-68c4-46a9-84ae-d369011a135d\") " pod="openshift-marketplace/redhat-operators-fvtg5" Jan 28 15:40:48 crc kubenswrapper[4871]: I0128 15:40:48.218833 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/617da384-68c4-46a9-84ae-d369011a135d-catalog-content\") pod \"redhat-operators-fvtg5\" (UID: \"617da384-68c4-46a9-84ae-d369011a135d\") " pod="openshift-marketplace/redhat-operators-fvtg5" Jan 28 15:40:48 crc kubenswrapper[4871]: I0128 
15:40:48.218898 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/617da384-68c4-46a9-84ae-d369011a135d-utilities\") pod \"redhat-operators-fvtg5\" (UID: \"617da384-68c4-46a9-84ae-d369011a135d\") " pod="openshift-marketplace/redhat-operators-fvtg5" Jan 28 15:40:48 crc kubenswrapper[4871]: I0128 15:40:48.218957 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql2fx\" (UniqueName: \"kubernetes.io/projected/617da384-68c4-46a9-84ae-d369011a135d-kube-api-access-ql2fx\") pod \"redhat-operators-fvtg5\" (UID: \"617da384-68c4-46a9-84ae-d369011a135d\") " pod="openshift-marketplace/redhat-operators-fvtg5" Jan 28 15:40:48 crc kubenswrapper[4871]: I0128 15:40:48.219744 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/617da384-68c4-46a9-84ae-d369011a135d-catalog-content\") pod \"redhat-operators-fvtg5\" (UID: \"617da384-68c4-46a9-84ae-d369011a135d\") " pod="openshift-marketplace/redhat-operators-fvtg5" Jan 28 15:40:48 crc kubenswrapper[4871]: I0128 15:40:48.219980 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/617da384-68c4-46a9-84ae-d369011a135d-utilities\") pod \"redhat-operators-fvtg5\" (UID: \"617da384-68c4-46a9-84ae-d369011a135d\") " pod="openshift-marketplace/redhat-operators-fvtg5" Jan 28 15:40:48 crc kubenswrapper[4871]: I0128 15:40:48.239493 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql2fx\" (UniqueName: \"kubernetes.io/projected/617da384-68c4-46a9-84ae-d369011a135d-kube-api-access-ql2fx\") pod \"redhat-operators-fvtg5\" (UID: \"617da384-68c4-46a9-84ae-d369011a135d\") " pod="openshift-marketplace/redhat-operators-fvtg5" Jan 28 15:40:48 crc kubenswrapper[4871]: I0128 15:40:48.270638 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fvtg5" Jan 28 15:40:48 crc kubenswrapper[4871]: I0128 15:40:48.763314 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fvtg5"] Jan 28 15:40:49 crc kubenswrapper[4871]: I0128 15:40:49.265877 4871 generic.go:334] "Generic (PLEG): container finished" podID="617da384-68c4-46a9-84ae-d369011a135d" containerID="7311997c013989cb20a726c2a44455e1e775ea01bfed3a9bf045609271765383" exitCode=0 Jan 28 15:40:49 crc kubenswrapper[4871]: I0128 15:40:49.265949 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvtg5" event={"ID":"617da384-68c4-46a9-84ae-d369011a135d","Type":"ContainerDied","Data":"7311997c013989cb20a726c2a44455e1e775ea01bfed3a9bf045609271765383"} Jan 28 15:40:49 crc kubenswrapper[4871]: I0128 15:40:49.266148 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvtg5" event={"ID":"617da384-68c4-46a9-84ae-d369011a135d","Type":"ContainerStarted","Data":"1975a833aea08299ccd1d7beef2c0059348ece220390290faaa926d551e218e8"} Jan 28 15:40:49 crc kubenswrapper[4871]: I0128 15:40:49.267692 4871 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:40:50 crc kubenswrapper[4871]: I0128 15:40:50.275733 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvtg5" event={"ID":"617da384-68c4-46a9-84ae-d369011a135d","Type":"ContainerStarted","Data":"c809ae74fedeffb92a979eba80bfd26192b2d56860625a2e87be735c8567955d"} Jan 28 15:40:52 crc kubenswrapper[4871]: I0128 15:40:52.308047 4871 generic.go:334] "Generic (PLEG): container finished" podID="617da384-68c4-46a9-84ae-d369011a135d" containerID="c809ae74fedeffb92a979eba80bfd26192b2d56860625a2e87be735c8567955d" exitCode=0 Jan 28 15:40:52 crc kubenswrapper[4871]: I0128 15:40:52.308178 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-fvtg5" event={"ID":"617da384-68c4-46a9-84ae-d369011a135d","Type":"ContainerDied","Data":"c809ae74fedeffb92a979eba80bfd26192b2d56860625a2e87be735c8567955d"} Jan 28 15:40:54 crc kubenswrapper[4871]: I0128 15:40:54.325647 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvtg5" event={"ID":"617da384-68c4-46a9-84ae-d369011a135d","Type":"ContainerStarted","Data":"ff7a7ce728ba3395936f80fc2a35f1378364e87c4e9a89114de850fa7211d22c"} Jan 28 15:40:54 crc kubenswrapper[4871]: I0128 15:40:54.351112 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fvtg5" podStartSLOduration=2.9228299399999997 podStartE2EDuration="7.351090554s" podCreationTimestamp="2026-01-28 15:40:47 +0000 UTC" firstStartedPulling="2026-01-28 15:40:49.267429813 +0000 UTC m=+1401.163268135" lastFinishedPulling="2026-01-28 15:40:53.695690427 +0000 UTC m=+1405.591528749" observedRunningTime="2026-01-28 15:40:54.342371991 +0000 UTC m=+1406.238210323" watchObservedRunningTime="2026-01-28 15:40:54.351090554 +0000 UTC m=+1406.246928876" Jan 28 15:40:58 crc kubenswrapper[4871]: I0128 15:40:58.271238 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fvtg5" Jan 28 15:40:58 crc kubenswrapper[4871]: I0128 15:40:58.271663 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fvtg5" Jan 28 15:40:59 crc kubenswrapper[4871]: I0128 15:40:59.318224 4871 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fvtg5" podUID="617da384-68c4-46a9-84ae-d369011a135d" containerName="registry-server" probeResult="failure" output=< Jan 28 15:40:59 crc kubenswrapper[4871]: timeout: failed to connect service ":50051" within 1s Jan 28 15:40:59 crc kubenswrapper[4871]: > Jan 28 15:41:08 crc kubenswrapper[4871]: I0128 
15:41:08.314320 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fvtg5" Jan 28 15:41:08 crc kubenswrapper[4871]: I0128 15:41:08.361622 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fvtg5" Jan 28 15:41:08 crc kubenswrapper[4871]: I0128 15:41:08.550717 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fvtg5"] Jan 28 15:41:09 crc kubenswrapper[4871]: I0128 15:41:09.439262 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fvtg5" podUID="617da384-68c4-46a9-84ae-d369011a135d" containerName="registry-server" containerID="cri-o://ff7a7ce728ba3395936f80fc2a35f1378364e87c4e9a89114de850fa7211d22c" gracePeriod=2 Jan 28 15:41:09 crc kubenswrapper[4871]: I0128 15:41:09.887705 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fvtg5" Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.081011 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/617da384-68c4-46a9-84ae-d369011a135d-catalog-content\") pod \"617da384-68c4-46a9-84ae-d369011a135d\" (UID: \"617da384-68c4-46a9-84ae-d369011a135d\") " Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.081208 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/617da384-68c4-46a9-84ae-d369011a135d-utilities\") pod \"617da384-68c4-46a9-84ae-d369011a135d\" (UID: \"617da384-68c4-46a9-84ae-d369011a135d\") " Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.081276 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql2fx\" (UniqueName: 
\"kubernetes.io/projected/617da384-68c4-46a9-84ae-d369011a135d-kube-api-access-ql2fx\") pod \"617da384-68c4-46a9-84ae-d369011a135d\" (UID: \"617da384-68c4-46a9-84ae-d369011a135d\") " Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.082421 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/617da384-68c4-46a9-84ae-d369011a135d-utilities" (OuterVolumeSpecName: "utilities") pod "617da384-68c4-46a9-84ae-d369011a135d" (UID: "617da384-68c4-46a9-84ae-d369011a135d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.086850 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/617da384-68c4-46a9-84ae-d369011a135d-kube-api-access-ql2fx" (OuterVolumeSpecName: "kube-api-access-ql2fx") pod "617da384-68c4-46a9-84ae-d369011a135d" (UID: "617da384-68c4-46a9-84ae-d369011a135d"). InnerVolumeSpecName "kube-api-access-ql2fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.183515 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql2fx\" (UniqueName: \"kubernetes.io/projected/617da384-68c4-46a9-84ae-d369011a135d-kube-api-access-ql2fx\") on node \"crc\" DevicePath \"\"" Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.183553 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/617da384-68c4-46a9-84ae-d369011a135d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.207216 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/617da384-68c4-46a9-84ae-d369011a135d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "617da384-68c4-46a9-84ae-d369011a135d" (UID: "617da384-68c4-46a9-84ae-d369011a135d"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.285387 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/617da384-68c4-46a9-84ae-d369011a135d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.448572 4871 generic.go:334] "Generic (PLEG): container finished" podID="617da384-68c4-46a9-84ae-d369011a135d" containerID="ff7a7ce728ba3395936f80fc2a35f1378364e87c4e9a89114de850fa7211d22c" exitCode=0 Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.448661 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvtg5" event={"ID":"617da384-68c4-46a9-84ae-d369011a135d","Type":"ContainerDied","Data":"ff7a7ce728ba3395936f80fc2a35f1378364e87c4e9a89114de850fa7211d22c"} Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.448693 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvtg5" event={"ID":"617da384-68c4-46a9-84ae-d369011a135d","Type":"ContainerDied","Data":"1975a833aea08299ccd1d7beef2c0059348ece220390290faaa926d551e218e8"} Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.448715 4871 scope.go:117] "RemoveContainer" containerID="ff7a7ce728ba3395936f80fc2a35f1378364e87c4e9a89114de850fa7211d22c" Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.448835 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fvtg5" Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.481568 4871 scope.go:117] "RemoveContainer" containerID="c809ae74fedeffb92a979eba80bfd26192b2d56860625a2e87be735c8567955d" Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.490415 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fvtg5"] Jan 28 15:41:10 crc kubenswrapper[4871]: E0128 15:41:10.491132 4871 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod617da384_68c4_46a9_84ae_d369011a135d.slice/crio-1975a833aea08299ccd1d7beef2c0059348ece220390290faaa926d551e218e8\": RecentStats: unable to find data in memory cache]" Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.498014 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fvtg5"] Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.511333 4871 scope.go:117] "RemoveContainer" containerID="7311997c013989cb20a726c2a44455e1e775ea01bfed3a9bf045609271765383" Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.549920 4871 scope.go:117] "RemoveContainer" containerID="ff7a7ce728ba3395936f80fc2a35f1378364e87c4e9a89114de850fa7211d22c" Jan 28 15:41:10 crc kubenswrapper[4871]: E0128 15:41:10.550458 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff7a7ce728ba3395936f80fc2a35f1378364e87c4e9a89114de850fa7211d22c\": container with ID starting with ff7a7ce728ba3395936f80fc2a35f1378364e87c4e9a89114de850fa7211d22c not found: ID does not exist" containerID="ff7a7ce728ba3395936f80fc2a35f1378364e87c4e9a89114de850fa7211d22c" Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.550607 4871 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ff7a7ce728ba3395936f80fc2a35f1378364e87c4e9a89114de850fa7211d22c"} err="failed to get container status \"ff7a7ce728ba3395936f80fc2a35f1378364e87c4e9a89114de850fa7211d22c\": rpc error: code = NotFound desc = could not find container \"ff7a7ce728ba3395936f80fc2a35f1378364e87c4e9a89114de850fa7211d22c\": container with ID starting with ff7a7ce728ba3395936f80fc2a35f1378364e87c4e9a89114de850fa7211d22c not found: ID does not exist" Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.550716 4871 scope.go:117] "RemoveContainer" containerID="c809ae74fedeffb92a979eba80bfd26192b2d56860625a2e87be735c8567955d" Jan 28 15:41:10 crc kubenswrapper[4871]: E0128 15:41:10.551174 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c809ae74fedeffb92a979eba80bfd26192b2d56860625a2e87be735c8567955d\": container with ID starting with c809ae74fedeffb92a979eba80bfd26192b2d56860625a2e87be735c8567955d not found: ID does not exist" containerID="c809ae74fedeffb92a979eba80bfd26192b2d56860625a2e87be735c8567955d" Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.551227 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c809ae74fedeffb92a979eba80bfd26192b2d56860625a2e87be735c8567955d"} err="failed to get container status \"c809ae74fedeffb92a979eba80bfd26192b2d56860625a2e87be735c8567955d\": rpc error: code = NotFound desc = could not find container \"c809ae74fedeffb92a979eba80bfd26192b2d56860625a2e87be735c8567955d\": container with ID starting with c809ae74fedeffb92a979eba80bfd26192b2d56860625a2e87be735c8567955d not found: ID does not exist" Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.551263 4871 scope.go:117] "RemoveContainer" containerID="7311997c013989cb20a726c2a44455e1e775ea01bfed3a9bf045609271765383" Jan 28 15:41:10 crc kubenswrapper[4871]: E0128 15:41:10.551535 4871 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7311997c013989cb20a726c2a44455e1e775ea01bfed3a9bf045609271765383\": container with ID starting with 7311997c013989cb20a726c2a44455e1e775ea01bfed3a9bf045609271765383 not found: ID does not exist" containerID="7311997c013989cb20a726c2a44455e1e775ea01bfed3a9bf045609271765383" Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.551664 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7311997c013989cb20a726c2a44455e1e775ea01bfed3a9bf045609271765383"} err="failed to get container status \"7311997c013989cb20a726c2a44455e1e775ea01bfed3a9bf045609271765383\": rpc error: code = NotFound desc = could not find container \"7311997c013989cb20a726c2a44455e1e775ea01bfed3a9bf045609271765383\": container with ID starting with 7311997c013989cb20a726c2a44455e1e775ea01bfed3a9bf045609271765383 not found: ID does not exist" Jan 28 15:41:10 crc kubenswrapper[4871]: I0128 15:41:10.916418 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="617da384-68c4-46a9-84ae-d369011a135d" path="/var/lib/kubelet/pods/617da384-68c4-46a9-84ae-d369011a135d/volumes" Jan 28 15:41:13 crc kubenswrapper[4871]: I0128 15:41:13.813776 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:41:13 crc kubenswrapper[4871]: I0128 15:41:13.814073 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:41:43 crc kubenswrapper[4871]: I0128 
15:41:43.813500 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:41:43 crc kubenswrapper[4871]: I0128 15:41:43.814056 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:42:13 crc kubenswrapper[4871]: I0128 15:42:13.813644 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:42:13 crc kubenswrapper[4871]: I0128 15:42:13.814159 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:42:13 crc kubenswrapper[4871]: I0128 15:42:13.814198 4871 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:42:13 crc kubenswrapper[4871]: I0128 15:42:13.814647 4871 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"65517cc51f93a0a3ef6adac2d141e0b3335a9afba2b2148de081b59fef76ee0c"} pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:42:13 crc kubenswrapper[4871]: I0128 15:42:13.814707 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" containerID="cri-o://65517cc51f93a0a3ef6adac2d141e0b3335a9afba2b2148de081b59fef76ee0c" gracePeriod=600 Jan 28 15:42:14 crc kubenswrapper[4871]: I0128 15:42:14.015193 4871 generic.go:334] "Generic (PLEG): container finished" podID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerID="65517cc51f93a0a3ef6adac2d141e0b3335a9afba2b2148de081b59fef76ee0c" exitCode=0 Jan 28 15:42:14 crc kubenswrapper[4871]: I0128 15:42:14.015235 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerDied","Data":"65517cc51f93a0a3ef6adac2d141e0b3335a9afba2b2148de081b59fef76ee0c"} Jan 28 15:42:14 crc kubenswrapper[4871]: I0128 15:42:14.015266 4871 scope.go:117] "RemoveContainer" containerID="68a4d17dddc2aafde953ccd496bb4751f12ecdd49f81aa1c0d30f071f672c508" Jan 28 15:42:15 crc kubenswrapper[4871]: I0128 15:42:15.025587 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerStarted","Data":"7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6"} Jan 28 15:42:19 crc kubenswrapper[4871]: I0128 15:42:19.320567 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lws7d"] Jan 28 15:42:19 crc kubenswrapper[4871]: E0128 15:42:19.321400 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="617da384-68c4-46a9-84ae-d369011a135d" containerName="extract-content" Jan 28 15:42:19 crc 
kubenswrapper[4871]: I0128 15:42:19.321415 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="617da384-68c4-46a9-84ae-d369011a135d" containerName="extract-content" Jan 28 15:42:19 crc kubenswrapper[4871]: E0128 15:42:19.321452 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="617da384-68c4-46a9-84ae-d369011a135d" containerName="registry-server" Jan 28 15:42:19 crc kubenswrapper[4871]: I0128 15:42:19.321461 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="617da384-68c4-46a9-84ae-d369011a135d" containerName="registry-server" Jan 28 15:42:19 crc kubenswrapper[4871]: E0128 15:42:19.321479 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="617da384-68c4-46a9-84ae-d369011a135d" containerName="extract-utilities" Jan 28 15:42:19 crc kubenswrapper[4871]: I0128 15:42:19.321489 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="617da384-68c4-46a9-84ae-d369011a135d" containerName="extract-utilities" Jan 28 15:42:19 crc kubenswrapper[4871]: I0128 15:42:19.322820 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="617da384-68c4-46a9-84ae-d369011a135d" containerName="registry-server" Jan 28 15:42:19 crc kubenswrapper[4871]: I0128 15:42:19.324328 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lws7d" Jan 28 15:42:19 crc kubenswrapper[4871]: I0128 15:42:19.337887 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lws7d"] Jan 28 15:42:19 crc kubenswrapper[4871]: I0128 15:42:19.519943 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e5b6222-80c8-47dc-92f8-13a0a75160be-utilities\") pod \"redhat-marketplace-lws7d\" (UID: \"3e5b6222-80c8-47dc-92f8-13a0a75160be\") " pod="openshift-marketplace/redhat-marketplace-lws7d" Jan 28 15:42:19 crc kubenswrapper[4871]: I0128 15:42:19.520012 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4p77\" (UniqueName: \"kubernetes.io/projected/3e5b6222-80c8-47dc-92f8-13a0a75160be-kube-api-access-f4p77\") pod \"redhat-marketplace-lws7d\" (UID: \"3e5b6222-80c8-47dc-92f8-13a0a75160be\") " pod="openshift-marketplace/redhat-marketplace-lws7d" Jan 28 15:42:19 crc kubenswrapper[4871]: I0128 15:42:19.520115 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e5b6222-80c8-47dc-92f8-13a0a75160be-catalog-content\") pod \"redhat-marketplace-lws7d\" (UID: \"3e5b6222-80c8-47dc-92f8-13a0a75160be\") " pod="openshift-marketplace/redhat-marketplace-lws7d" Jan 28 15:42:19 crc kubenswrapper[4871]: I0128 15:42:19.621329 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e5b6222-80c8-47dc-92f8-13a0a75160be-utilities\") pod \"redhat-marketplace-lws7d\" (UID: \"3e5b6222-80c8-47dc-92f8-13a0a75160be\") " pod="openshift-marketplace/redhat-marketplace-lws7d" Jan 28 15:42:19 crc kubenswrapper[4871]: I0128 15:42:19.621658 4871 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-f4p77\" (UniqueName: \"kubernetes.io/projected/3e5b6222-80c8-47dc-92f8-13a0a75160be-kube-api-access-f4p77\") pod \"redhat-marketplace-lws7d\" (UID: \"3e5b6222-80c8-47dc-92f8-13a0a75160be\") " pod="openshift-marketplace/redhat-marketplace-lws7d" Jan 28 15:42:19 crc kubenswrapper[4871]: I0128 15:42:19.621748 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e5b6222-80c8-47dc-92f8-13a0a75160be-catalog-content\") pod \"redhat-marketplace-lws7d\" (UID: \"3e5b6222-80c8-47dc-92f8-13a0a75160be\") " pod="openshift-marketplace/redhat-marketplace-lws7d" Jan 28 15:42:19 crc kubenswrapper[4871]: I0128 15:42:19.621772 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e5b6222-80c8-47dc-92f8-13a0a75160be-utilities\") pod \"redhat-marketplace-lws7d\" (UID: \"3e5b6222-80c8-47dc-92f8-13a0a75160be\") " pod="openshift-marketplace/redhat-marketplace-lws7d" Jan 28 15:42:19 crc kubenswrapper[4871]: I0128 15:42:19.622088 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e5b6222-80c8-47dc-92f8-13a0a75160be-catalog-content\") pod \"redhat-marketplace-lws7d\" (UID: \"3e5b6222-80c8-47dc-92f8-13a0a75160be\") " pod="openshift-marketplace/redhat-marketplace-lws7d" Jan 28 15:42:19 crc kubenswrapper[4871]: I0128 15:42:19.644579 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4p77\" (UniqueName: \"kubernetes.io/projected/3e5b6222-80c8-47dc-92f8-13a0a75160be-kube-api-access-f4p77\") pod \"redhat-marketplace-lws7d\" (UID: \"3e5b6222-80c8-47dc-92f8-13a0a75160be\") " pod="openshift-marketplace/redhat-marketplace-lws7d" Jan 28 15:42:19 crc kubenswrapper[4871]: I0128 15:42:19.658640 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lws7d" Jan 28 15:42:20 crc kubenswrapper[4871]: I0128 15:42:20.465558 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lws7d"] Jan 28 15:42:21 crc kubenswrapper[4871]: I0128 15:42:21.073239 4871 generic.go:334] "Generic (PLEG): container finished" podID="3e5b6222-80c8-47dc-92f8-13a0a75160be" containerID="081b36a67eeae765547c4f9a9351c41f60ffd8c5346cb7c8a87909c8801dec17" exitCode=0 Jan 28 15:42:21 crc kubenswrapper[4871]: I0128 15:42:21.073329 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lws7d" event={"ID":"3e5b6222-80c8-47dc-92f8-13a0a75160be","Type":"ContainerDied","Data":"081b36a67eeae765547c4f9a9351c41f60ffd8c5346cb7c8a87909c8801dec17"} Jan 28 15:42:21 crc kubenswrapper[4871]: I0128 15:42:21.073733 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lws7d" event={"ID":"3e5b6222-80c8-47dc-92f8-13a0a75160be","Type":"ContainerStarted","Data":"c64533d2f381384103c38b4e755c9b9eaf60ff659794f95287ca23ed785922df"} Jan 28 15:42:22 crc kubenswrapper[4871]: I0128 15:42:22.082818 4871 generic.go:334] "Generic (PLEG): container finished" podID="3e5b6222-80c8-47dc-92f8-13a0a75160be" containerID="189c33b330bd1d79a210e8cde657381d06fde9f0cff12ab1ea76134a57cc9f7b" exitCode=0 Jan 28 15:42:22 crc kubenswrapper[4871]: I0128 15:42:22.082864 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lws7d" event={"ID":"3e5b6222-80c8-47dc-92f8-13a0a75160be","Type":"ContainerDied","Data":"189c33b330bd1d79a210e8cde657381d06fde9f0cff12ab1ea76134a57cc9f7b"} Jan 28 15:42:23 crc kubenswrapper[4871]: I0128 15:42:23.093957 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lws7d" 
event={"ID":"3e5b6222-80c8-47dc-92f8-13a0a75160be","Type":"ContainerStarted","Data":"5a3d4b6c3e3044473a3fc72b8e68aa5152ee53b00cd21b5c742dfbde03c46208"} Jan 28 15:42:23 crc kubenswrapper[4871]: I0128 15:42:23.118345 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lws7d" podStartSLOduration=2.611729304 podStartE2EDuration="4.118324459s" podCreationTimestamp="2026-01-28 15:42:19 +0000 UTC" firstStartedPulling="2026-01-28 15:42:21.074657165 +0000 UTC m=+1492.970495497" lastFinishedPulling="2026-01-28 15:42:22.58125234 +0000 UTC m=+1494.477090652" observedRunningTime="2026-01-28 15:42:23.116883364 +0000 UTC m=+1495.012721686" watchObservedRunningTime="2026-01-28 15:42:23.118324459 +0000 UTC m=+1495.014162781" Jan 28 15:42:29 crc kubenswrapper[4871]: I0128 15:42:29.659340 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lws7d" Jan 28 15:42:29 crc kubenswrapper[4871]: I0128 15:42:29.659724 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lws7d" Jan 28 15:42:29 crc kubenswrapper[4871]: I0128 15:42:29.709512 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lws7d" Jan 28 15:42:30 crc kubenswrapper[4871]: I0128 15:42:30.261930 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lws7d" Jan 28 15:42:30 crc kubenswrapper[4871]: I0128 15:42:30.318329 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lws7d"] Jan 28 15:42:32 crc kubenswrapper[4871]: I0128 15:42:32.162296 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lws7d" podUID="3e5b6222-80c8-47dc-92f8-13a0a75160be" containerName="registry-server" 
containerID="cri-o://5a3d4b6c3e3044473a3fc72b8e68aa5152ee53b00cd21b5c742dfbde03c46208" gracePeriod=2 Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.161618 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lws7d" Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.170988 4871 generic.go:334] "Generic (PLEG): container finished" podID="3e5b6222-80c8-47dc-92f8-13a0a75160be" containerID="5a3d4b6c3e3044473a3fc72b8e68aa5152ee53b00cd21b5c742dfbde03c46208" exitCode=0 Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.171032 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lws7d" event={"ID":"3e5b6222-80c8-47dc-92f8-13a0a75160be","Type":"ContainerDied","Data":"5a3d4b6c3e3044473a3fc72b8e68aa5152ee53b00cd21b5c742dfbde03c46208"} Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.171058 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lws7d" event={"ID":"3e5b6222-80c8-47dc-92f8-13a0a75160be","Type":"ContainerDied","Data":"c64533d2f381384103c38b4e755c9b9eaf60ff659794f95287ca23ed785922df"} Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.171074 4871 scope.go:117] "RemoveContainer" containerID="5a3d4b6c3e3044473a3fc72b8e68aa5152ee53b00cd21b5c742dfbde03c46208" Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.171201 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lws7d" Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.199065 4871 scope.go:117] "RemoveContainer" containerID="189c33b330bd1d79a210e8cde657381d06fde9f0cff12ab1ea76134a57cc9f7b" Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.230560 4871 scope.go:117] "RemoveContainer" containerID="081b36a67eeae765547c4f9a9351c41f60ffd8c5346cb7c8a87909c8801dec17" Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.264160 4871 scope.go:117] "RemoveContainer" containerID="5a3d4b6c3e3044473a3fc72b8e68aa5152ee53b00cd21b5c742dfbde03c46208" Jan 28 15:42:33 crc kubenswrapper[4871]: E0128 15:42:33.264635 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a3d4b6c3e3044473a3fc72b8e68aa5152ee53b00cd21b5c742dfbde03c46208\": container with ID starting with 5a3d4b6c3e3044473a3fc72b8e68aa5152ee53b00cd21b5c742dfbde03c46208 not found: ID does not exist" containerID="5a3d4b6c3e3044473a3fc72b8e68aa5152ee53b00cd21b5c742dfbde03c46208" Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.264666 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a3d4b6c3e3044473a3fc72b8e68aa5152ee53b00cd21b5c742dfbde03c46208"} err="failed to get container status \"5a3d4b6c3e3044473a3fc72b8e68aa5152ee53b00cd21b5c742dfbde03c46208\": rpc error: code = NotFound desc = could not find container \"5a3d4b6c3e3044473a3fc72b8e68aa5152ee53b00cd21b5c742dfbde03c46208\": container with ID starting with 5a3d4b6c3e3044473a3fc72b8e68aa5152ee53b00cd21b5c742dfbde03c46208 not found: ID does not exist" Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.264689 4871 scope.go:117] "RemoveContainer" containerID="189c33b330bd1d79a210e8cde657381d06fde9f0cff12ab1ea76134a57cc9f7b" Jan 28 15:42:33 crc kubenswrapper[4871]: E0128 15:42:33.265086 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"189c33b330bd1d79a210e8cde657381d06fde9f0cff12ab1ea76134a57cc9f7b\": container with ID starting with 189c33b330bd1d79a210e8cde657381d06fde9f0cff12ab1ea76134a57cc9f7b not found: ID does not exist" containerID="189c33b330bd1d79a210e8cde657381d06fde9f0cff12ab1ea76134a57cc9f7b" Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.265126 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"189c33b330bd1d79a210e8cde657381d06fde9f0cff12ab1ea76134a57cc9f7b"} err="failed to get container status \"189c33b330bd1d79a210e8cde657381d06fde9f0cff12ab1ea76134a57cc9f7b\": rpc error: code = NotFound desc = could not find container \"189c33b330bd1d79a210e8cde657381d06fde9f0cff12ab1ea76134a57cc9f7b\": container with ID starting with 189c33b330bd1d79a210e8cde657381d06fde9f0cff12ab1ea76134a57cc9f7b not found: ID does not exist" Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.265163 4871 scope.go:117] "RemoveContainer" containerID="081b36a67eeae765547c4f9a9351c41f60ffd8c5346cb7c8a87909c8801dec17" Jan 28 15:42:33 crc kubenswrapper[4871]: E0128 15:42:33.265470 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"081b36a67eeae765547c4f9a9351c41f60ffd8c5346cb7c8a87909c8801dec17\": container with ID starting with 081b36a67eeae765547c4f9a9351c41f60ffd8c5346cb7c8a87909c8801dec17 not found: ID does not exist" containerID="081b36a67eeae765547c4f9a9351c41f60ffd8c5346cb7c8a87909c8801dec17" Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.265502 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"081b36a67eeae765547c4f9a9351c41f60ffd8c5346cb7c8a87909c8801dec17"} err="failed to get container status \"081b36a67eeae765547c4f9a9351c41f60ffd8c5346cb7c8a87909c8801dec17\": rpc error: code = NotFound desc = could not find container 
\"081b36a67eeae765547c4f9a9351c41f60ffd8c5346cb7c8a87909c8801dec17\": container with ID starting with 081b36a67eeae765547c4f9a9351c41f60ffd8c5346cb7c8a87909c8801dec17 not found: ID does not exist" Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.325330 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e5b6222-80c8-47dc-92f8-13a0a75160be-utilities\") pod \"3e5b6222-80c8-47dc-92f8-13a0a75160be\" (UID: \"3e5b6222-80c8-47dc-92f8-13a0a75160be\") " Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.325457 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4p77\" (UniqueName: \"kubernetes.io/projected/3e5b6222-80c8-47dc-92f8-13a0a75160be-kube-api-access-f4p77\") pod \"3e5b6222-80c8-47dc-92f8-13a0a75160be\" (UID: \"3e5b6222-80c8-47dc-92f8-13a0a75160be\") " Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.325522 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e5b6222-80c8-47dc-92f8-13a0a75160be-catalog-content\") pod \"3e5b6222-80c8-47dc-92f8-13a0a75160be\" (UID: \"3e5b6222-80c8-47dc-92f8-13a0a75160be\") " Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.326731 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e5b6222-80c8-47dc-92f8-13a0a75160be-utilities" (OuterVolumeSpecName: "utilities") pod "3e5b6222-80c8-47dc-92f8-13a0a75160be" (UID: "3e5b6222-80c8-47dc-92f8-13a0a75160be"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.331253 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e5b6222-80c8-47dc-92f8-13a0a75160be-kube-api-access-f4p77" (OuterVolumeSpecName: "kube-api-access-f4p77") pod "3e5b6222-80c8-47dc-92f8-13a0a75160be" (UID: "3e5b6222-80c8-47dc-92f8-13a0a75160be"). InnerVolumeSpecName "kube-api-access-f4p77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.339470 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e5b6222-80c8-47dc-92f8-13a0a75160be-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.339506 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4p77\" (UniqueName: \"kubernetes.io/projected/3e5b6222-80c8-47dc-92f8-13a0a75160be-kube-api-access-f4p77\") on node \"crc\" DevicePath \"\"" Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.347066 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e5b6222-80c8-47dc-92f8-13a0a75160be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e5b6222-80c8-47dc-92f8-13a0a75160be" (UID: "3e5b6222-80c8-47dc-92f8-13a0a75160be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.443687 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e5b6222-80c8-47dc-92f8-13a0a75160be-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.545683 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lws7d"] Jan 28 15:42:33 crc kubenswrapper[4871]: I0128 15:42:33.553198 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lws7d"] Jan 28 15:42:34 crc kubenswrapper[4871]: I0128 15:42:34.917079 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e5b6222-80c8-47dc-92f8-13a0a75160be" path="/var/lib/kubelet/pods/3e5b6222-80c8-47dc-92f8-13a0a75160be/volumes" Jan 28 15:42:43 crc kubenswrapper[4871]: I0128 15:42:43.091818 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5jggg"] Jan 28 15:42:43 crc kubenswrapper[4871]: E0128 15:42:43.092827 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e5b6222-80c8-47dc-92f8-13a0a75160be" containerName="extract-utilities" Jan 28 15:42:43 crc kubenswrapper[4871]: I0128 15:42:43.092843 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e5b6222-80c8-47dc-92f8-13a0a75160be" containerName="extract-utilities" Jan 28 15:42:43 crc kubenswrapper[4871]: E0128 15:42:43.092854 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e5b6222-80c8-47dc-92f8-13a0a75160be" containerName="registry-server" Jan 28 15:42:43 crc kubenswrapper[4871]: I0128 15:42:43.092860 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e5b6222-80c8-47dc-92f8-13a0a75160be" containerName="registry-server" Jan 28 15:42:43 crc kubenswrapper[4871]: E0128 15:42:43.092879 4871 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="3e5b6222-80c8-47dc-92f8-13a0a75160be" containerName="extract-content" Jan 28 15:42:43 crc kubenswrapper[4871]: I0128 15:42:43.092886 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e5b6222-80c8-47dc-92f8-13a0a75160be" containerName="extract-content" Jan 28 15:42:43 crc kubenswrapper[4871]: I0128 15:42:43.093057 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e5b6222-80c8-47dc-92f8-13a0a75160be" containerName="registry-server" Jan 28 15:42:43 crc kubenswrapper[4871]: I0128 15:42:43.104833 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jggg" Jan 28 15:42:43 crc kubenswrapper[4871]: I0128 15:42:43.145839 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5jggg"] Jan 28 15:42:43 crc kubenswrapper[4871]: I0128 15:42:43.193790 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c585a47-0cc0-43a3-a229-4382b6e46b4f-catalog-content\") pod \"community-operators-5jggg\" (UID: \"6c585a47-0cc0-43a3-a229-4382b6e46b4f\") " pod="openshift-marketplace/community-operators-5jggg" Jan 28 15:42:43 crc kubenswrapper[4871]: I0128 15:42:43.193910 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c585a47-0cc0-43a3-a229-4382b6e46b4f-utilities\") pod \"community-operators-5jggg\" (UID: \"6c585a47-0cc0-43a3-a229-4382b6e46b4f\") " pod="openshift-marketplace/community-operators-5jggg" Jan 28 15:42:43 crc kubenswrapper[4871]: I0128 15:42:43.194001 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crkm7\" (UniqueName: \"kubernetes.io/projected/6c585a47-0cc0-43a3-a229-4382b6e46b4f-kube-api-access-crkm7\") pod 
\"community-operators-5jggg\" (UID: \"6c585a47-0cc0-43a3-a229-4382b6e46b4f\") " pod="openshift-marketplace/community-operators-5jggg" Jan 28 15:42:43 crc kubenswrapper[4871]: I0128 15:42:43.294879 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c585a47-0cc0-43a3-a229-4382b6e46b4f-utilities\") pod \"community-operators-5jggg\" (UID: \"6c585a47-0cc0-43a3-a229-4382b6e46b4f\") " pod="openshift-marketplace/community-operators-5jggg" Jan 28 15:42:43 crc kubenswrapper[4871]: I0128 15:42:43.294934 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crkm7\" (UniqueName: \"kubernetes.io/projected/6c585a47-0cc0-43a3-a229-4382b6e46b4f-kube-api-access-crkm7\") pod \"community-operators-5jggg\" (UID: \"6c585a47-0cc0-43a3-a229-4382b6e46b4f\") " pod="openshift-marketplace/community-operators-5jggg" Jan 28 15:42:43 crc kubenswrapper[4871]: I0128 15:42:43.295021 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c585a47-0cc0-43a3-a229-4382b6e46b4f-catalog-content\") pod \"community-operators-5jggg\" (UID: \"6c585a47-0cc0-43a3-a229-4382b6e46b4f\") " pod="openshift-marketplace/community-operators-5jggg" Jan 28 15:42:43 crc kubenswrapper[4871]: I0128 15:42:43.295487 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c585a47-0cc0-43a3-a229-4382b6e46b4f-utilities\") pod \"community-operators-5jggg\" (UID: \"6c585a47-0cc0-43a3-a229-4382b6e46b4f\") " pod="openshift-marketplace/community-operators-5jggg" Jan 28 15:42:43 crc kubenswrapper[4871]: I0128 15:42:43.295535 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c585a47-0cc0-43a3-a229-4382b6e46b4f-catalog-content\") pod \"community-operators-5jggg\" (UID: 
\"6c585a47-0cc0-43a3-a229-4382b6e46b4f\") " pod="openshift-marketplace/community-operators-5jggg" Jan 28 15:42:43 crc kubenswrapper[4871]: I0128 15:42:43.322800 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crkm7\" (UniqueName: \"kubernetes.io/projected/6c585a47-0cc0-43a3-a229-4382b6e46b4f-kube-api-access-crkm7\") pod \"community-operators-5jggg\" (UID: \"6c585a47-0cc0-43a3-a229-4382b6e46b4f\") " pod="openshift-marketplace/community-operators-5jggg" Jan 28 15:42:43 crc kubenswrapper[4871]: I0128 15:42:43.449175 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jggg" Jan 28 15:42:43 crc kubenswrapper[4871]: I0128 15:42:43.973505 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5jggg"] Jan 28 15:42:44 crc kubenswrapper[4871]: I0128 15:42:44.260811 4871 generic.go:334] "Generic (PLEG): container finished" podID="6c585a47-0cc0-43a3-a229-4382b6e46b4f" containerID="18a6341f2da5696ed350311719a40cd8198056daf4db48d020d37375f1d46830" exitCode=0 Jan 28 15:42:44 crc kubenswrapper[4871]: I0128 15:42:44.260927 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jggg" event={"ID":"6c585a47-0cc0-43a3-a229-4382b6e46b4f","Type":"ContainerDied","Data":"18a6341f2da5696ed350311719a40cd8198056daf4db48d020d37375f1d46830"} Jan 28 15:42:44 crc kubenswrapper[4871]: I0128 15:42:44.262072 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jggg" event={"ID":"6c585a47-0cc0-43a3-a229-4382b6e46b4f","Type":"ContainerStarted","Data":"c664e491e687e04eeaad5268be38f75fae2b365cdc4681db35f5fdbf41a07f5b"} Jan 28 15:42:45 crc kubenswrapper[4871]: I0128 15:42:45.270840 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jggg" 
event={"ID":"6c585a47-0cc0-43a3-a229-4382b6e46b4f","Type":"ContainerStarted","Data":"dd3fb229d8374572a2fc948800716e0ee31ffb2092d91d33903c72e85f6b55d1"} Jan 28 15:42:46 crc kubenswrapper[4871]: I0128 15:42:46.282833 4871 generic.go:334] "Generic (PLEG): container finished" podID="6c585a47-0cc0-43a3-a229-4382b6e46b4f" containerID="dd3fb229d8374572a2fc948800716e0ee31ffb2092d91d33903c72e85f6b55d1" exitCode=0 Jan 28 15:42:46 crc kubenswrapper[4871]: I0128 15:42:46.283071 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jggg" event={"ID":"6c585a47-0cc0-43a3-a229-4382b6e46b4f","Type":"ContainerDied","Data":"dd3fb229d8374572a2fc948800716e0ee31ffb2092d91d33903c72e85f6b55d1"} Jan 28 15:42:47 crc kubenswrapper[4871]: I0128 15:42:47.291944 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jggg" event={"ID":"6c585a47-0cc0-43a3-a229-4382b6e46b4f","Type":"ContainerStarted","Data":"57dc5eecf3ca25c4a453c89157a4f13f29b9afeb4bf95ab12d60db5c658dedfe"} Jan 28 15:42:47 crc kubenswrapper[4871]: I0128 15:42:47.310016 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5jggg" podStartSLOduration=1.627822583 podStartE2EDuration="4.310002286s" podCreationTimestamp="2026-01-28 15:42:43 +0000 UTC" firstStartedPulling="2026-01-28 15:42:44.262358432 +0000 UTC m=+1516.158196754" lastFinishedPulling="2026-01-28 15:42:46.944538135 +0000 UTC m=+1518.840376457" observedRunningTime="2026-01-28 15:42:47.307987593 +0000 UTC m=+1519.203825925" watchObservedRunningTime="2026-01-28 15:42:47.310002286 +0000 UTC m=+1519.205840608" Jan 28 15:42:53 crc kubenswrapper[4871]: I0128 15:42:53.449697 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5jggg" Jan 28 15:42:53 crc kubenswrapper[4871]: I0128 15:42:53.450267 4871 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-5jggg" Jan 28 15:42:53 crc kubenswrapper[4871]: I0128 15:42:53.494321 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5jggg" Jan 28 15:42:54 crc kubenswrapper[4871]: I0128 15:42:54.393104 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5jggg" Jan 28 15:42:54 crc kubenswrapper[4871]: I0128 15:42:54.443293 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5jggg"] Jan 28 15:42:56 crc kubenswrapper[4871]: I0128 15:42:56.362788 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5jggg" podUID="6c585a47-0cc0-43a3-a229-4382b6e46b4f" containerName="registry-server" containerID="cri-o://57dc5eecf3ca25c4a453c89157a4f13f29b9afeb4bf95ab12d60db5c658dedfe" gracePeriod=2 Jan 28 15:42:57 crc kubenswrapper[4871]: I0128 15:42:57.374511 4871 generic.go:334] "Generic (PLEG): container finished" podID="6c585a47-0cc0-43a3-a229-4382b6e46b4f" containerID="57dc5eecf3ca25c4a453c89157a4f13f29b9afeb4bf95ab12d60db5c658dedfe" exitCode=0 Jan 28 15:42:57 crc kubenswrapper[4871]: I0128 15:42:57.374557 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jggg" event={"ID":"6c585a47-0cc0-43a3-a229-4382b6e46b4f","Type":"ContainerDied","Data":"57dc5eecf3ca25c4a453c89157a4f13f29b9afeb4bf95ab12d60db5c658dedfe"} Jan 28 15:42:57 crc kubenswrapper[4871]: I0128 15:42:57.969077 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5jggg" Jan 28 15:42:58 crc kubenswrapper[4871]: I0128 15:42:58.033250 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crkm7\" (UniqueName: \"kubernetes.io/projected/6c585a47-0cc0-43a3-a229-4382b6e46b4f-kube-api-access-crkm7\") pod \"6c585a47-0cc0-43a3-a229-4382b6e46b4f\" (UID: \"6c585a47-0cc0-43a3-a229-4382b6e46b4f\") " Jan 28 15:42:58 crc kubenswrapper[4871]: I0128 15:42:58.033355 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c585a47-0cc0-43a3-a229-4382b6e46b4f-catalog-content\") pod \"6c585a47-0cc0-43a3-a229-4382b6e46b4f\" (UID: \"6c585a47-0cc0-43a3-a229-4382b6e46b4f\") " Jan 28 15:42:58 crc kubenswrapper[4871]: I0128 15:42:58.033405 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c585a47-0cc0-43a3-a229-4382b6e46b4f-utilities\") pod \"6c585a47-0cc0-43a3-a229-4382b6e46b4f\" (UID: \"6c585a47-0cc0-43a3-a229-4382b6e46b4f\") " Jan 28 15:42:58 crc kubenswrapper[4871]: I0128 15:42:58.034399 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c585a47-0cc0-43a3-a229-4382b6e46b4f-utilities" (OuterVolumeSpecName: "utilities") pod "6c585a47-0cc0-43a3-a229-4382b6e46b4f" (UID: "6c585a47-0cc0-43a3-a229-4382b6e46b4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:42:58 crc kubenswrapper[4871]: I0128 15:42:58.041468 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c585a47-0cc0-43a3-a229-4382b6e46b4f-kube-api-access-crkm7" (OuterVolumeSpecName: "kube-api-access-crkm7") pod "6c585a47-0cc0-43a3-a229-4382b6e46b4f" (UID: "6c585a47-0cc0-43a3-a229-4382b6e46b4f"). InnerVolumeSpecName "kube-api-access-crkm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:42:58 crc kubenswrapper[4871]: I0128 15:42:58.088424 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c585a47-0cc0-43a3-a229-4382b6e46b4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c585a47-0cc0-43a3-a229-4382b6e46b4f" (UID: "6c585a47-0cc0-43a3-a229-4382b6e46b4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:42:58 crc kubenswrapper[4871]: I0128 15:42:58.134834 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crkm7\" (UniqueName: \"kubernetes.io/projected/6c585a47-0cc0-43a3-a229-4382b6e46b4f-kube-api-access-crkm7\") on node \"crc\" DevicePath \"\"" Jan 28 15:42:58 crc kubenswrapper[4871]: I0128 15:42:58.134869 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c585a47-0cc0-43a3-a229-4382b6e46b4f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:42:58 crc kubenswrapper[4871]: I0128 15:42:58.134879 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c585a47-0cc0-43a3-a229-4382b6e46b4f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:42:58 crc kubenswrapper[4871]: I0128 15:42:58.386212 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jggg" event={"ID":"6c585a47-0cc0-43a3-a229-4382b6e46b4f","Type":"ContainerDied","Data":"c664e491e687e04eeaad5268be38f75fae2b365cdc4681db35f5fdbf41a07f5b"} Jan 28 15:42:58 crc kubenswrapper[4871]: I0128 15:42:58.386506 4871 scope.go:117] "RemoveContainer" containerID="57dc5eecf3ca25c4a453c89157a4f13f29b9afeb4bf95ab12d60db5c658dedfe" Jan 28 15:42:58 crc kubenswrapper[4871]: I0128 15:42:58.386300 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5jggg" Jan 28 15:42:58 crc kubenswrapper[4871]: I0128 15:42:58.413659 4871 scope.go:117] "RemoveContainer" containerID="dd3fb229d8374572a2fc948800716e0ee31ffb2092d91d33903c72e85f6b55d1" Jan 28 15:42:58 crc kubenswrapper[4871]: I0128 15:42:58.428021 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5jggg"] Jan 28 15:42:58 crc kubenswrapper[4871]: I0128 15:42:58.443923 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5jggg"] Jan 28 15:42:58 crc kubenswrapper[4871]: I0128 15:42:58.444729 4871 scope.go:117] "RemoveContainer" containerID="18a6341f2da5696ed350311719a40cd8198056daf4db48d020d37375f1d46830" Jan 28 15:42:58 crc kubenswrapper[4871]: I0128 15:42:58.915793 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c585a47-0cc0-43a3-a229-4382b6e46b4f" path="/var/lib/kubelet/pods/6c585a47-0cc0-43a3-a229-4382b6e46b4f/volumes" Jan 28 15:43:03 crc kubenswrapper[4871]: I0128 15:43:03.871307 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-622lh"] Jan 28 15:43:03 crc kubenswrapper[4871]: E0128 15:43:03.872272 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c585a47-0cc0-43a3-a229-4382b6e46b4f" containerName="extract-utilities" Jan 28 15:43:03 crc kubenswrapper[4871]: I0128 15:43:03.872286 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c585a47-0cc0-43a3-a229-4382b6e46b4f" containerName="extract-utilities" Jan 28 15:43:03 crc kubenswrapper[4871]: E0128 15:43:03.872307 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c585a47-0cc0-43a3-a229-4382b6e46b4f" containerName="extract-content" Jan 28 15:43:03 crc kubenswrapper[4871]: I0128 15:43:03.872313 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c585a47-0cc0-43a3-a229-4382b6e46b4f" 
containerName="extract-content" Jan 28 15:43:03 crc kubenswrapper[4871]: E0128 15:43:03.872326 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c585a47-0cc0-43a3-a229-4382b6e46b4f" containerName="registry-server" Jan 28 15:43:03 crc kubenswrapper[4871]: I0128 15:43:03.872332 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c585a47-0cc0-43a3-a229-4382b6e46b4f" containerName="registry-server" Jan 28 15:43:03 crc kubenswrapper[4871]: I0128 15:43:03.872489 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c585a47-0cc0-43a3-a229-4382b6e46b4f" containerName="registry-server" Jan 28 15:43:03 crc kubenswrapper[4871]: I0128 15:43:03.873738 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-622lh" Jan 28 15:43:03 crc kubenswrapper[4871]: I0128 15:43:03.884697 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-622lh"] Jan 28 15:43:03 crc kubenswrapper[4871]: I0128 15:43:03.899415 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac2584a9-2eb8-4cc5-85b1-854bbe2fb976-catalog-content\") pod \"certified-operators-622lh\" (UID: \"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976\") " pod="openshift-marketplace/certified-operators-622lh" Jan 28 15:43:03 crc kubenswrapper[4871]: I0128 15:43:03.899523 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac2584a9-2eb8-4cc5-85b1-854bbe2fb976-utilities\") pod \"certified-operators-622lh\" (UID: \"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976\") " pod="openshift-marketplace/certified-operators-622lh" Jan 28 15:43:03 crc kubenswrapper[4871]: I0128 15:43:03.899648 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mzpt7\" (UniqueName: \"kubernetes.io/projected/ac2584a9-2eb8-4cc5-85b1-854bbe2fb976-kube-api-access-mzpt7\") pod \"certified-operators-622lh\" (UID: \"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976\") " pod="openshift-marketplace/certified-operators-622lh" Jan 28 15:43:04 crc kubenswrapper[4871]: I0128 15:43:04.001148 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac2584a9-2eb8-4cc5-85b1-854bbe2fb976-utilities\") pod \"certified-operators-622lh\" (UID: \"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976\") " pod="openshift-marketplace/certified-operators-622lh" Jan 28 15:43:04 crc kubenswrapper[4871]: I0128 15:43:04.001336 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzpt7\" (UniqueName: \"kubernetes.io/projected/ac2584a9-2eb8-4cc5-85b1-854bbe2fb976-kube-api-access-mzpt7\") pod \"certified-operators-622lh\" (UID: \"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976\") " pod="openshift-marketplace/certified-operators-622lh" Jan 28 15:43:04 crc kubenswrapper[4871]: I0128 15:43:04.001473 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac2584a9-2eb8-4cc5-85b1-854bbe2fb976-catalog-content\") pod \"certified-operators-622lh\" (UID: \"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976\") " pod="openshift-marketplace/certified-operators-622lh" Jan 28 15:43:04 crc kubenswrapper[4871]: I0128 15:43:04.001706 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac2584a9-2eb8-4cc5-85b1-854bbe2fb976-utilities\") pod \"certified-operators-622lh\" (UID: \"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976\") " pod="openshift-marketplace/certified-operators-622lh" Jan 28 15:43:04 crc kubenswrapper[4871]: I0128 15:43:04.001915 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/ac2584a9-2eb8-4cc5-85b1-854bbe2fb976-catalog-content\") pod \"certified-operators-622lh\" (UID: \"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976\") " pod="openshift-marketplace/certified-operators-622lh" Jan 28 15:43:04 crc kubenswrapper[4871]: I0128 15:43:04.022394 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzpt7\" (UniqueName: \"kubernetes.io/projected/ac2584a9-2eb8-4cc5-85b1-854bbe2fb976-kube-api-access-mzpt7\") pod \"certified-operators-622lh\" (UID: \"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976\") " pod="openshift-marketplace/certified-operators-622lh" Jan 28 15:43:04 crc kubenswrapper[4871]: I0128 15:43:04.205576 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-622lh" Jan 28 15:43:04 crc kubenswrapper[4871]: I0128 15:43:04.660317 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-622lh"] Jan 28 15:43:05 crc kubenswrapper[4871]: I0128 15:43:05.450804 4871 generic.go:334] "Generic (PLEG): container finished" podID="ac2584a9-2eb8-4cc5-85b1-854bbe2fb976" containerID="2c89fa6a856e8b8539e120d6e094c0d46ae675078209d0e40a17b86575ace0a6" exitCode=0 Jan 28 15:43:05 crc kubenswrapper[4871]: I0128 15:43:05.450864 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-622lh" event={"ID":"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976","Type":"ContainerDied","Data":"2c89fa6a856e8b8539e120d6e094c0d46ae675078209d0e40a17b86575ace0a6"} Jan 28 15:43:05 crc kubenswrapper[4871]: I0128 15:43:05.451160 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-622lh" event={"ID":"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976","Type":"ContainerStarted","Data":"f944072df1f5a383e06b7f58f340b8e3d31589a9dc714d01464c46d742c00ed4"} Jan 28 15:43:06 crc kubenswrapper[4871]: I0128 15:43:06.459506 4871 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-622lh" event={"ID":"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976","Type":"ContainerStarted","Data":"355d9d47037bdafbc6999117afdcff41a903cfc73f1422588617208f9f0eab51"} Jan 28 15:43:07 crc kubenswrapper[4871]: I0128 15:43:07.469058 4871 generic.go:334] "Generic (PLEG): container finished" podID="ac2584a9-2eb8-4cc5-85b1-854bbe2fb976" containerID="355d9d47037bdafbc6999117afdcff41a903cfc73f1422588617208f9f0eab51" exitCode=0 Jan 28 15:43:07 crc kubenswrapper[4871]: I0128 15:43:07.469113 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-622lh" event={"ID":"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976","Type":"ContainerDied","Data":"355d9d47037bdafbc6999117afdcff41a903cfc73f1422588617208f9f0eab51"} Jan 28 15:43:08 crc kubenswrapper[4871]: I0128 15:43:08.482799 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-622lh" event={"ID":"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976","Type":"ContainerStarted","Data":"ae3f2d9cc517c026fbc8216b80e9b7f158c87c9feb82b98516cfae35fd69122d"} Jan 28 15:43:08 crc kubenswrapper[4871]: I0128 15:43:08.507012 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-622lh" podStartSLOduration=3.078555363 podStartE2EDuration="5.506996579s" podCreationTimestamp="2026-01-28 15:43:03 +0000 UTC" firstStartedPulling="2026-01-28 15:43:05.45246536 +0000 UTC m=+1537.348303682" lastFinishedPulling="2026-01-28 15:43:07.880906586 +0000 UTC m=+1539.776744898" observedRunningTime="2026-01-28 15:43:08.503321105 +0000 UTC m=+1540.399159427" watchObservedRunningTime="2026-01-28 15:43:08.506996579 +0000 UTC m=+1540.402834891" Jan 28 15:43:14 crc kubenswrapper[4871]: I0128 15:43:14.206271 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-622lh" Jan 28 15:43:14 crc kubenswrapper[4871]: I0128 
15:43:14.207270 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-622lh" Jan 28 15:43:14 crc kubenswrapper[4871]: I0128 15:43:14.251326 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-622lh" Jan 28 15:43:14 crc kubenswrapper[4871]: I0128 15:43:14.615253 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-622lh" Jan 28 15:43:14 crc kubenswrapper[4871]: I0128 15:43:14.666000 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-622lh"] Jan 28 15:43:16 crc kubenswrapper[4871]: I0128 15:43:16.542651 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-622lh" podUID="ac2584a9-2eb8-4cc5-85b1-854bbe2fb976" containerName="registry-server" containerID="cri-o://ae3f2d9cc517c026fbc8216b80e9b7f158c87c9feb82b98516cfae35fd69122d" gracePeriod=2 Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.507814 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-622lh" Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.598388 4871 generic.go:334] "Generic (PLEG): container finished" podID="ac2584a9-2eb8-4cc5-85b1-854bbe2fb976" containerID="ae3f2d9cc517c026fbc8216b80e9b7f158c87c9feb82b98516cfae35fd69122d" exitCode=0 Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.598431 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-622lh" event={"ID":"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976","Type":"ContainerDied","Data":"ae3f2d9cc517c026fbc8216b80e9b7f158c87c9feb82b98516cfae35fd69122d"} Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.598457 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-622lh" event={"ID":"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976","Type":"ContainerDied","Data":"f944072df1f5a383e06b7f58f340b8e3d31589a9dc714d01464c46d742c00ed4"} Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.598473 4871 scope.go:117] "RemoveContainer" containerID="ae3f2d9cc517c026fbc8216b80e9b7f158c87c9feb82b98516cfae35fd69122d" Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.598616 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-622lh" Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.632220 4871 scope.go:117] "RemoveContainer" containerID="355d9d47037bdafbc6999117afdcff41a903cfc73f1422588617208f9f0eab51" Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.632286 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac2584a9-2eb8-4cc5-85b1-854bbe2fb976-utilities\") pod \"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976\" (UID: \"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976\") " Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.632351 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac2584a9-2eb8-4cc5-85b1-854bbe2fb976-catalog-content\") pod \"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976\" (UID: \"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976\") " Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.632455 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzpt7\" (UniqueName: \"kubernetes.io/projected/ac2584a9-2eb8-4cc5-85b1-854bbe2fb976-kube-api-access-mzpt7\") pod \"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976\" (UID: \"ac2584a9-2eb8-4cc5-85b1-854bbe2fb976\") " Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.633766 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac2584a9-2eb8-4cc5-85b1-854bbe2fb976-utilities" (OuterVolumeSpecName: "utilities") pod "ac2584a9-2eb8-4cc5-85b1-854bbe2fb976" (UID: "ac2584a9-2eb8-4cc5-85b1-854bbe2fb976"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.638809 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac2584a9-2eb8-4cc5-85b1-854bbe2fb976-kube-api-access-mzpt7" (OuterVolumeSpecName: "kube-api-access-mzpt7") pod "ac2584a9-2eb8-4cc5-85b1-854bbe2fb976" (UID: "ac2584a9-2eb8-4cc5-85b1-854bbe2fb976"). InnerVolumeSpecName "kube-api-access-mzpt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.690572 4871 scope.go:117] "RemoveContainer" containerID="2c89fa6a856e8b8539e120d6e094c0d46ae675078209d0e40a17b86575ace0a6" Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.720680 4871 scope.go:117] "RemoveContainer" containerID="ae3f2d9cc517c026fbc8216b80e9b7f158c87c9feb82b98516cfae35fd69122d" Jan 28 15:43:17 crc kubenswrapper[4871]: E0128 15:43:17.720974 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae3f2d9cc517c026fbc8216b80e9b7f158c87c9feb82b98516cfae35fd69122d\": container with ID starting with ae3f2d9cc517c026fbc8216b80e9b7f158c87c9feb82b98516cfae35fd69122d not found: ID does not exist" containerID="ae3f2d9cc517c026fbc8216b80e9b7f158c87c9feb82b98516cfae35fd69122d" Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.721006 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae3f2d9cc517c026fbc8216b80e9b7f158c87c9feb82b98516cfae35fd69122d"} err="failed to get container status \"ae3f2d9cc517c026fbc8216b80e9b7f158c87c9feb82b98516cfae35fd69122d\": rpc error: code = NotFound desc = could not find container \"ae3f2d9cc517c026fbc8216b80e9b7f158c87c9feb82b98516cfae35fd69122d\": container with ID starting with ae3f2d9cc517c026fbc8216b80e9b7f158c87c9feb82b98516cfae35fd69122d not found: ID does not exist" Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.721028 
4871 scope.go:117] "RemoveContainer" containerID="355d9d47037bdafbc6999117afdcff41a903cfc73f1422588617208f9f0eab51" Jan 28 15:43:17 crc kubenswrapper[4871]: E0128 15:43:17.721223 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"355d9d47037bdafbc6999117afdcff41a903cfc73f1422588617208f9f0eab51\": container with ID starting with 355d9d47037bdafbc6999117afdcff41a903cfc73f1422588617208f9f0eab51 not found: ID does not exist" containerID="355d9d47037bdafbc6999117afdcff41a903cfc73f1422588617208f9f0eab51" Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.721248 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"355d9d47037bdafbc6999117afdcff41a903cfc73f1422588617208f9f0eab51"} err="failed to get container status \"355d9d47037bdafbc6999117afdcff41a903cfc73f1422588617208f9f0eab51\": rpc error: code = NotFound desc = could not find container \"355d9d47037bdafbc6999117afdcff41a903cfc73f1422588617208f9f0eab51\": container with ID starting with 355d9d47037bdafbc6999117afdcff41a903cfc73f1422588617208f9f0eab51 not found: ID does not exist" Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.721262 4871 scope.go:117] "RemoveContainer" containerID="2c89fa6a856e8b8539e120d6e094c0d46ae675078209d0e40a17b86575ace0a6" Jan 28 15:43:17 crc kubenswrapper[4871]: E0128 15:43:17.721537 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c89fa6a856e8b8539e120d6e094c0d46ae675078209d0e40a17b86575ace0a6\": container with ID starting with 2c89fa6a856e8b8539e120d6e094c0d46ae675078209d0e40a17b86575ace0a6 not found: ID does not exist" containerID="2c89fa6a856e8b8539e120d6e094c0d46ae675078209d0e40a17b86575ace0a6" Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.721609 4871 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2c89fa6a856e8b8539e120d6e094c0d46ae675078209d0e40a17b86575ace0a6"} err="failed to get container status \"2c89fa6a856e8b8539e120d6e094c0d46ae675078209d0e40a17b86575ace0a6\": rpc error: code = NotFound desc = could not find container \"2c89fa6a856e8b8539e120d6e094c0d46ae675078209d0e40a17b86575ace0a6\": container with ID starting with 2c89fa6a856e8b8539e120d6e094c0d46ae675078209d0e40a17b86575ace0a6 not found: ID does not exist" Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.735035 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzpt7\" (UniqueName: \"kubernetes.io/projected/ac2584a9-2eb8-4cc5-85b1-854bbe2fb976-kube-api-access-mzpt7\") on node \"crc\" DevicePath \"\"" Jan 28 15:43:17 crc kubenswrapper[4871]: I0128 15:43:17.735074 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac2584a9-2eb8-4cc5-85b1-854bbe2fb976-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:43:18 crc kubenswrapper[4871]: I0128 15:43:18.154290 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac2584a9-2eb8-4cc5-85b1-854bbe2fb976-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac2584a9-2eb8-4cc5-85b1-854bbe2fb976" (UID: "ac2584a9-2eb8-4cc5-85b1-854bbe2fb976"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:43:18 crc kubenswrapper[4871]: I0128 15:43:18.239953 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-622lh"] Jan 28 15:43:18 crc kubenswrapper[4871]: I0128 15:43:18.243930 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac2584a9-2eb8-4cc5-85b1-854bbe2fb976-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:43:18 crc kubenswrapper[4871]: I0128 15:43:18.246334 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-622lh"] Jan 28 15:43:18 crc kubenswrapper[4871]: I0128 15:43:18.913161 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac2584a9-2eb8-4cc5-85b1-854bbe2fb976" path="/var/lib/kubelet/pods/ac2584a9-2eb8-4cc5-85b1-854bbe2fb976/volumes" Jan 28 15:44:30 crc kubenswrapper[4871]: I0128 15:44:30.093045 4871 scope.go:117] "RemoveContainer" containerID="4712d33e245c90713521b4f4c39b4b38850cc29b9e7828bdedfa85ec129de186" Jan 28 15:44:30 crc kubenswrapper[4871]: I0128 15:44:30.125786 4871 scope.go:117] "RemoveContainer" containerID="a168f71adb7b17620d61493b4713d0933e0fe77a619ded51a954b0d27fc1be10" Jan 28 15:44:30 crc kubenswrapper[4871]: I0128 15:44:30.174834 4871 scope.go:117] "RemoveContainer" containerID="a907faa39ffd998edd2434303c38467a92356e4b6ccf376f4017bb2cd6a22426" Jan 28 15:44:30 crc kubenswrapper[4871]: I0128 15:44:30.221004 4871 scope.go:117] "RemoveContainer" containerID="de425b69a7249fd5458c804684161b32a996773362a4c2597bae28654c7a82be" Jan 28 15:44:43 crc kubenswrapper[4871]: I0128 15:44:43.813971 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 
15:44:43 crc kubenswrapper[4871]: I0128 15:44:43.814707 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:45:00 crc kubenswrapper[4871]: I0128 15:45:00.146936 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-9b2sx"] Jan 28 15:45:00 crc kubenswrapper[4871]: E0128 15:45:00.148119 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac2584a9-2eb8-4cc5-85b1-854bbe2fb976" containerName="registry-server" Jan 28 15:45:00 crc kubenswrapper[4871]: I0128 15:45:00.148139 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac2584a9-2eb8-4cc5-85b1-854bbe2fb976" containerName="registry-server" Jan 28 15:45:00 crc kubenswrapper[4871]: E0128 15:45:00.148173 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac2584a9-2eb8-4cc5-85b1-854bbe2fb976" containerName="extract-content" Jan 28 15:45:00 crc kubenswrapper[4871]: I0128 15:45:00.148181 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac2584a9-2eb8-4cc5-85b1-854bbe2fb976" containerName="extract-content" Jan 28 15:45:00 crc kubenswrapper[4871]: E0128 15:45:00.148200 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac2584a9-2eb8-4cc5-85b1-854bbe2fb976" containerName="extract-utilities" Jan 28 15:45:00 crc kubenswrapper[4871]: I0128 15:45:00.148208 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac2584a9-2eb8-4cc5-85b1-854bbe2fb976" containerName="extract-utilities" Jan 28 15:45:00 crc kubenswrapper[4871]: I0128 15:45:00.148407 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac2584a9-2eb8-4cc5-85b1-854bbe2fb976" containerName="registry-server" Jan 28 15:45:00 crc 
kubenswrapper[4871]: I0128 15:45:00.149268 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-9b2sx" Jan 28 15:45:00 crc kubenswrapper[4871]: I0128 15:45:00.155269 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 15:45:00 crc kubenswrapper[4871]: I0128 15:45:00.156736 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 15:45:00 crc kubenswrapper[4871]: I0128 15:45:00.167540 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-9b2sx"] Jan 28 15:45:00 crc kubenswrapper[4871]: I0128 15:45:00.303616 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08cabc60-22e8-477e-bc39-0fb426e43c5d-config-volume\") pod \"collect-profiles-29493585-9b2sx\" (UID: \"08cabc60-22e8-477e-bc39-0fb426e43c5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-9b2sx" Jan 28 15:45:00 crc kubenswrapper[4871]: I0128 15:45:00.303737 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08cabc60-22e8-477e-bc39-0fb426e43c5d-secret-volume\") pod \"collect-profiles-29493585-9b2sx\" (UID: \"08cabc60-22e8-477e-bc39-0fb426e43c5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-9b2sx" Jan 28 15:45:00 crc kubenswrapper[4871]: I0128 15:45:00.303767 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmczg\" (UniqueName: \"kubernetes.io/projected/08cabc60-22e8-477e-bc39-0fb426e43c5d-kube-api-access-kmczg\") pod \"collect-profiles-29493585-9b2sx\" 
(UID: \"08cabc60-22e8-477e-bc39-0fb426e43c5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-9b2sx" Jan 28 15:45:00 crc kubenswrapper[4871]: I0128 15:45:00.405058 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmczg\" (UniqueName: \"kubernetes.io/projected/08cabc60-22e8-477e-bc39-0fb426e43c5d-kube-api-access-kmczg\") pod \"collect-profiles-29493585-9b2sx\" (UID: \"08cabc60-22e8-477e-bc39-0fb426e43c5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-9b2sx" Jan 28 15:45:00 crc kubenswrapper[4871]: I0128 15:45:00.405110 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08cabc60-22e8-477e-bc39-0fb426e43c5d-secret-volume\") pod \"collect-profiles-29493585-9b2sx\" (UID: \"08cabc60-22e8-477e-bc39-0fb426e43c5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-9b2sx" Jan 28 15:45:00 crc kubenswrapper[4871]: I0128 15:45:00.405198 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08cabc60-22e8-477e-bc39-0fb426e43c5d-config-volume\") pod \"collect-profiles-29493585-9b2sx\" (UID: \"08cabc60-22e8-477e-bc39-0fb426e43c5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-9b2sx" Jan 28 15:45:00 crc kubenswrapper[4871]: I0128 15:45:00.406134 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08cabc60-22e8-477e-bc39-0fb426e43c5d-config-volume\") pod \"collect-profiles-29493585-9b2sx\" (UID: \"08cabc60-22e8-477e-bc39-0fb426e43c5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-9b2sx" Jan 28 15:45:00 crc kubenswrapper[4871]: I0128 15:45:00.420468 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/08cabc60-22e8-477e-bc39-0fb426e43c5d-secret-volume\") pod \"collect-profiles-29493585-9b2sx\" (UID: \"08cabc60-22e8-477e-bc39-0fb426e43c5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-9b2sx" Jan 28 15:45:00 crc kubenswrapper[4871]: I0128 15:45:00.424645 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmczg\" (UniqueName: \"kubernetes.io/projected/08cabc60-22e8-477e-bc39-0fb426e43c5d-kube-api-access-kmczg\") pod \"collect-profiles-29493585-9b2sx\" (UID: \"08cabc60-22e8-477e-bc39-0fb426e43c5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-9b2sx" Jan 28 15:45:00 crc kubenswrapper[4871]: I0128 15:45:00.469220 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-9b2sx" Jan 28 15:45:00 crc kubenswrapper[4871]: I0128 15:45:00.883706 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-9b2sx"] Jan 28 15:45:01 crc kubenswrapper[4871]: I0128 15:45:01.455889 4871 generic.go:334] "Generic (PLEG): container finished" podID="08cabc60-22e8-477e-bc39-0fb426e43c5d" containerID="0aeaa01c09acfd34adbc1e0c83f7bfbf3372a78ccfb629b9bb7da7924d036750" exitCode=0 Jan 28 15:45:01 crc kubenswrapper[4871]: I0128 15:45:01.455964 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-9b2sx" event={"ID":"08cabc60-22e8-477e-bc39-0fb426e43c5d","Type":"ContainerDied","Data":"0aeaa01c09acfd34adbc1e0c83f7bfbf3372a78ccfb629b9bb7da7924d036750"} Jan 28 15:45:01 crc kubenswrapper[4871]: I0128 15:45:01.456294 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-9b2sx" 
event={"ID":"08cabc60-22e8-477e-bc39-0fb426e43c5d","Type":"ContainerStarted","Data":"fdf5e78f7ba98d08c630c1733c70cdccf010ff57010e689d3240dda700b6beb2"} Jan 28 15:45:02 crc kubenswrapper[4871]: I0128 15:45:02.764594 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-9b2sx" Jan 28 15:45:02 crc kubenswrapper[4871]: I0128 15:45:02.844423 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmczg\" (UniqueName: \"kubernetes.io/projected/08cabc60-22e8-477e-bc39-0fb426e43c5d-kube-api-access-kmczg\") pod \"08cabc60-22e8-477e-bc39-0fb426e43c5d\" (UID: \"08cabc60-22e8-477e-bc39-0fb426e43c5d\") " Jan 28 15:45:02 crc kubenswrapper[4871]: I0128 15:45:02.844617 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08cabc60-22e8-477e-bc39-0fb426e43c5d-config-volume\") pod \"08cabc60-22e8-477e-bc39-0fb426e43c5d\" (UID: \"08cabc60-22e8-477e-bc39-0fb426e43c5d\") " Jan 28 15:45:02 crc kubenswrapper[4871]: I0128 15:45:02.844713 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08cabc60-22e8-477e-bc39-0fb426e43c5d-secret-volume\") pod \"08cabc60-22e8-477e-bc39-0fb426e43c5d\" (UID: \"08cabc60-22e8-477e-bc39-0fb426e43c5d\") " Jan 28 15:45:02 crc kubenswrapper[4871]: I0128 15:45:02.845751 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08cabc60-22e8-477e-bc39-0fb426e43c5d-config-volume" (OuterVolumeSpecName: "config-volume") pod "08cabc60-22e8-477e-bc39-0fb426e43c5d" (UID: "08cabc60-22e8-477e-bc39-0fb426e43c5d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:02 crc kubenswrapper[4871]: I0128 15:45:02.851270 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08cabc60-22e8-477e-bc39-0fb426e43c5d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "08cabc60-22e8-477e-bc39-0fb426e43c5d" (UID: "08cabc60-22e8-477e-bc39-0fb426e43c5d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:02 crc kubenswrapper[4871]: I0128 15:45:02.851800 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08cabc60-22e8-477e-bc39-0fb426e43c5d-kube-api-access-kmczg" (OuterVolumeSpecName: "kube-api-access-kmczg") pod "08cabc60-22e8-477e-bc39-0fb426e43c5d" (UID: "08cabc60-22e8-477e-bc39-0fb426e43c5d"). InnerVolumeSpecName "kube-api-access-kmczg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:02 crc kubenswrapper[4871]: I0128 15:45:02.946812 4871 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08cabc60-22e8-477e-bc39-0fb426e43c5d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:02 crc kubenswrapper[4871]: I0128 15:45:02.947404 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmczg\" (UniqueName: \"kubernetes.io/projected/08cabc60-22e8-477e-bc39-0fb426e43c5d-kube-api-access-kmczg\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:02 crc kubenswrapper[4871]: I0128 15:45:02.947465 4871 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08cabc60-22e8-477e-bc39-0fb426e43c5d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:03 crc kubenswrapper[4871]: I0128 15:45:03.472522 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-9b2sx" 
event={"ID":"08cabc60-22e8-477e-bc39-0fb426e43c5d","Type":"ContainerDied","Data":"fdf5e78f7ba98d08c630c1733c70cdccf010ff57010e689d3240dda700b6beb2"} Jan 28 15:45:03 crc kubenswrapper[4871]: I0128 15:45:03.472573 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdf5e78f7ba98d08c630c1733c70cdccf010ff57010e689d3240dda700b6beb2" Jan 28 15:45:03 crc kubenswrapper[4871]: I0128 15:45:03.472613 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-9b2sx" Jan 28 15:45:13 crc kubenswrapper[4871]: I0128 15:45:13.814291 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:45:13 crc kubenswrapper[4871]: I0128 15:45:13.815248 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:45:43 crc kubenswrapper[4871]: I0128 15:45:43.813885 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:45:43 crc kubenswrapper[4871]: I0128 15:45:43.815762 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:45:43 crc kubenswrapper[4871]: I0128 15:45:43.815847 4871 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:45:43 crc kubenswrapper[4871]: I0128 15:45:43.816529 4871 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6"} pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:45:43 crc kubenswrapper[4871]: I0128 15:45:43.816578 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" containerID="cri-o://7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" gracePeriod=600 Jan 28 15:45:43 crc kubenswrapper[4871]: E0128 15:45:43.942843 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:45:44 crc kubenswrapper[4871]: I0128 15:45:44.808819 4871 generic.go:334] "Generic (PLEG): container finished" podID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" exitCode=0 Jan 28 15:45:44 crc kubenswrapper[4871]: I0128 15:45:44.809278 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerDied","Data":"7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6"} Jan 28 15:45:44 crc kubenswrapper[4871]: I0128 15:45:44.809330 4871 scope.go:117] "RemoveContainer" containerID="65517cc51f93a0a3ef6adac2d141e0b3335a9afba2b2148de081b59fef76ee0c" Jan 28 15:45:44 crc kubenswrapper[4871]: I0128 15:45:44.810072 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:45:44 crc kubenswrapper[4871]: E0128 15:45:44.810364 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:45:56 crc kubenswrapper[4871]: I0128 15:45:56.904722 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:45:56 crc kubenswrapper[4871]: E0128 15:45:56.906931 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:46:07 crc kubenswrapper[4871]: I0128 15:46:07.903827 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:46:07 crc kubenswrapper[4871]: E0128 15:46:07.904829 4871 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:46:22 crc kubenswrapper[4871]: I0128 15:46:22.904266 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:46:22 crc kubenswrapper[4871]: E0128 15:46:22.907125 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:46:36 crc kubenswrapper[4871]: I0128 15:46:36.904120 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:46:36 crc kubenswrapper[4871]: E0128 15:46:36.905331 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:46:51 crc kubenswrapper[4871]: I0128 15:46:51.904392 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:46:51 crc kubenswrapper[4871]: E0128 
15:46:51.905315 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:47:03 crc kubenswrapper[4871]: I0128 15:47:03.904138 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:47:03 crc kubenswrapper[4871]: E0128 15:47:03.905115 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:47:15 crc kubenswrapper[4871]: I0128 15:47:15.904951 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:47:15 crc kubenswrapper[4871]: E0128 15:47:15.905931 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:47:29 crc kubenswrapper[4871]: I0128 15:47:29.904827 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:47:29 crc 
kubenswrapper[4871]: E0128 15:47:29.905737 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:47:43 crc kubenswrapper[4871]: I0128 15:47:43.904635 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:47:43 crc kubenswrapper[4871]: E0128 15:47:43.905497 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:47:52 crc kubenswrapper[4871]: I0128 15:47:52.047734 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-dlmwk"] Jan 28 15:47:52 crc kubenswrapper[4871]: I0128 15:47:52.060981 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-dlmwk"] Jan 28 15:47:52 crc kubenswrapper[4871]: I0128 15:47:52.070305 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-4s628"] Jan 28 15:47:52 crc kubenswrapper[4871]: I0128 15:47:52.079504 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-pcq2c"] Jan 28 15:47:52 crc kubenswrapper[4871]: I0128 15:47:52.088019 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-c48d-account-create-update-6plbm"] Jan 28 15:47:52 crc 
kubenswrapper[4871]: I0128 15:47:52.103783 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1112-account-create-update-q2w6s"] Jan 28 15:47:52 crc kubenswrapper[4871]: I0128 15:47:52.115082 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7d19-account-create-update-fvmz8"] Jan 28 15:47:52 crc kubenswrapper[4871]: I0128 15:47:52.123135 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-pcq2c"] Jan 28 15:47:52 crc kubenswrapper[4871]: I0128 15:47:52.130811 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-4s628"] Jan 28 15:47:52 crc kubenswrapper[4871]: I0128 15:47:52.137470 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1112-account-create-update-q2w6s"] Jan 28 15:47:52 crc kubenswrapper[4871]: I0128 15:47:52.144114 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-c48d-account-create-update-6plbm"] Jan 28 15:47:52 crc kubenswrapper[4871]: I0128 15:47:52.151554 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7d19-account-create-update-fvmz8"] Jan 28 15:47:52 crc kubenswrapper[4871]: I0128 15:47:52.918950 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c6a3a96-8a57-4097-a921-58250c387ddc" path="/var/lib/kubelet/pods/0c6a3a96-8a57-4097-a921-58250c387ddc/volumes" Jan 28 15:47:52 crc kubenswrapper[4871]: I0128 15:47:52.919990 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dc6911f-8892-4a6a-99c8-d9cee0eac352" path="/var/lib/kubelet/pods/2dc6911f-8892-4a6a-99c8-d9cee0eac352/volumes" Jan 28 15:47:52 crc kubenswrapper[4871]: I0128 15:47:52.920832 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62e4563a-ee68-409e-b5c4-f4c53657c71d" path="/var/lib/kubelet/pods/62e4563a-ee68-409e-b5c4-f4c53657c71d/volumes" Jan 28 15:47:52 crc kubenswrapper[4871]: I0128 15:47:52.921780 4871 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9" path="/var/lib/kubelet/pods/7f6fbeb5-1fc1-4012-9e72-8e74f72f7cd9/volumes" Jan 28 15:47:52 crc kubenswrapper[4871]: I0128 15:47:52.923709 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b566ceb5-8640-40f3-bbf1-c0e11f82602c" path="/var/lib/kubelet/pods/b566ceb5-8640-40f3-bbf1-c0e11f82602c/volumes" Jan 28 15:47:52 crc kubenswrapper[4871]: I0128 15:47:52.925066 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf5794fa-0b86-4d4a-a9ba-500a4834e315" path="/var/lib/kubelet/pods/cf5794fa-0b86-4d4a-a9ba-500a4834e315/volumes" Jan 28 15:47:57 crc kubenswrapper[4871]: I0128 15:47:57.904095 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:47:57 crc kubenswrapper[4871]: E0128 15:47:57.904920 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:48:08 crc kubenswrapper[4871]: I0128 15:48:08.908777 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:48:08 crc kubenswrapper[4871]: E0128 15:48:08.909467 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" 
podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:48:19 crc kubenswrapper[4871]: I0128 15:48:19.070107 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-whvv9"] Jan 28 15:48:19 crc kubenswrapper[4871]: I0128 15:48:19.076997 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-whvv9"] Jan 28 15:48:20 crc kubenswrapper[4871]: I0128 15:48:20.913655 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="795d6166-93a3-4371-a502-b18f09b9374f" path="/var/lib/kubelet/pods/795d6166-93a3-4371-a502-b18f09b9374f/volumes" Jan 28 15:48:23 crc kubenswrapper[4871]: I0128 15:48:23.904112 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:48:23 crc kubenswrapper[4871]: E0128 15:48:23.905258 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:48:27 crc kubenswrapper[4871]: I0128 15:48:27.031711 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-mtkgx"] Jan 28 15:48:27 crc kubenswrapper[4871]: I0128 15:48:27.040496 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-mtkgx"] Jan 28 15:48:27 crc kubenswrapper[4871]: I0128 15:48:27.050814 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9205-account-create-update-cdfq6"] Jan 28 15:48:27 crc kubenswrapper[4871]: I0128 15:48:27.062006 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-pv7m2"] Jan 28 15:48:27 crc kubenswrapper[4871]: 
I0128 15:48:27.069507 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-eacc-account-create-update-lc94g"] Jan 28 15:48:27 crc kubenswrapper[4871]: I0128 15:48:27.076149 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-eacc-account-create-update-lc94g"] Jan 28 15:48:27 crc kubenswrapper[4871]: I0128 15:48:27.083224 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9205-account-create-update-cdfq6"] Jan 28 15:48:27 crc kubenswrapper[4871]: I0128 15:48:27.090339 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-pv7m2"] Jan 28 15:48:28 crc kubenswrapper[4871]: I0128 15:48:28.032011 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c27-account-create-update-4vvzx"] Jan 28 15:48:28 crc kubenswrapper[4871]: I0128 15:48:28.041627 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-bfzvn"] Jan 28 15:48:28 crc kubenswrapper[4871]: I0128 15:48:28.051043 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7c27-account-create-update-4vvzx"] Jan 28 15:48:28 crc kubenswrapper[4871]: I0128 15:48:28.059673 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-bfzvn"] Jan 28 15:48:28 crc kubenswrapper[4871]: I0128 15:48:28.916157 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c9ac2bf-e1f3-47de-8c33-14098a2217e4" path="/var/lib/kubelet/pods/1c9ac2bf-e1f3-47de-8c33-14098a2217e4/volumes" Jan 28 15:48:28 crc kubenswrapper[4871]: I0128 15:48:28.917082 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bbb2fd8-2668-47de-b3cd-94371c8baa8d" path="/var/lib/kubelet/pods/4bbb2fd8-2668-47de-b3cd-94371c8baa8d/volumes" Jan 28 15:48:28 crc kubenswrapper[4871]: I0128 15:48:28.918150 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af" 
path="/var/lib/kubelet/pods/7ec4cd59-230f-4dc8-b2bd-f44d4f6cd1af/volumes" Jan 28 15:48:28 crc kubenswrapper[4871]: I0128 15:48:28.918983 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e" path="/var/lib/kubelet/pods/a9be2fd9-a59f-43ef-bf12-58c6d3bcf07e/volumes" Jan 28 15:48:28 crc kubenswrapper[4871]: I0128 15:48:28.920554 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b189b010-97b8-43c7-8021-05cb6f277f13" path="/var/lib/kubelet/pods/b189b010-97b8-43c7-8021-05cb6f277f13/volumes" Jan 28 15:48:28 crc kubenswrapper[4871]: I0128 15:48:28.921110 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b276b666-fe5b-4c85-8c89-3854e2bdbfc3" path="/var/lib/kubelet/pods/b276b666-fe5b-4c85-8c89-3854e2bdbfc3/volumes" Jan 28 15:48:30 crc kubenswrapper[4871]: I0128 15:48:30.341460 4871 scope.go:117] "RemoveContainer" containerID="73de87bb082df8e7d269f1357755647880224adb3d3dc9afae13df19a33b76dc" Jan 28 15:48:30 crc kubenswrapper[4871]: I0128 15:48:30.404788 4871 scope.go:117] "RemoveContainer" containerID="c80e3cb9a24e4376e616c5771654172e0d2f5c132360823fe57fbf0958bcfd1f" Jan 28 15:48:30 crc kubenswrapper[4871]: I0128 15:48:30.461526 4871 scope.go:117] "RemoveContainer" containerID="1c80e9c34abfefe75a00a78c48ef3cf07b099eb9d19f3088872c9eae58048805" Jan 28 15:48:30 crc kubenswrapper[4871]: I0128 15:48:30.495043 4871 scope.go:117] "RemoveContainer" containerID="f084090db21acc11080672ab9665821f725f10cf65033b57ac0b0890b0e859a1" Jan 28 15:48:30 crc kubenswrapper[4871]: I0128 15:48:30.536067 4871 scope.go:117] "RemoveContainer" containerID="5bb6ef37566f2fddaa54c3ebf69088dc134fea5a7a9af37d0b670c9bbd5dcaa6" Jan 28 15:48:30 crc kubenswrapper[4871]: I0128 15:48:30.572909 4871 scope.go:117] "RemoveContainer" containerID="abf32b777bfe9cec1c7a7c854167e55fe4451d16dc7135cec16ad0a19466c771" Jan 28 15:48:30 crc kubenswrapper[4871]: I0128 15:48:30.617775 4871 scope.go:117] "RemoveContainer" 
containerID="576e70132aa7e3928671d95b5af42a5f7d24b87b5f6f7945b4f7815073df39fc" Jan 28 15:48:30 crc kubenswrapper[4871]: I0128 15:48:30.635960 4871 scope.go:117] "RemoveContainer" containerID="cefd8b9f3cf39496dab49884be2940c9c5d05195abb50d8ebe50261d75cc6376" Jan 28 15:48:30 crc kubenswrapper[4871]: I0128 15:48:30.656373 4871 scope.go:117] "RemoveContainer" containerID="7c2d90c2179017626d8a8e94f80887a005deb3ff2e0f1ae7a74ae38d1ef9494f" Jan 28 15:48:30 crc kubenswrapper[4871]: I0128 15:48:30.676430 4871 scope.go:117] "RemoveContainer" containerID="b0b6e4fd6be3235029f7796b4bb5c5192c1bfb3b8464abd694389ba1e8565b5e" Jan 28 15:48:30 crc kubenswrapper[4871]: I0128 15:48:30.696560 4871 scope.go:117] "RemoveContainer" containerID="25f014d0df7c1bc14135cf91e1ff4ac240e161ecffdd633482bb37e5ac65f583" Jan 28 15:48:30 crc kubenswrapper[4871]: I0128 15:48:30.726504 4871 scope.go:117] "RemoveContainer" containerID="f53e6eb592f33c9aa514fc180121e16c2542e27dccdd5dd9bd717bc40aedb24a" Jan 28 15:48:30 crc kubenswrapper[4871]: I0128 15:48:30.745068 4871 scope.go:117] "RemoveContainer" containerID="cbece9d0f209df3d64601d227690cf1ca9461182b86d1d7ad94ebb55e395612d" Jan 28 15:48:36 crc kubenswrapper[4871]: I0128 15:48:36.903531 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:48:36 crc kubenswrapper[4871]: E0128 15:48:36.904325 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:48:43 crc kubenswrapper[4871]: I0128 15:48:43.054537 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-s99pp"] Jan 28 15:48:43 
crc kubenswrapper[4871]: I0128 15:48:43.060782 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-s99pp"] Jan 28 15:48:44 crc kubenswrapper[4871]: I0128 15:48:44.926183 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02722604-9d90-40f1-9518-ee221fecdca0" path="/var/lib/kubelet/pods/02722604-9d90-40f1-9518-ee221fecdca0/volumes" Jan 28 15:48:47 crc kubenswrapper[4871]: I0128 15:48:47.903975 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:48:47 crc kubenswrapper[4871]: E0128 15:48:47.904663 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:49:00 crc kubenswrapper[4871]: I0128 15:49:00.904336 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:49:00 crc kubenswrapper[4871]: E0128 15:49:00.905156 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:49:14 crc kubenswrapper[4871]: I0128 15:49:14.904143 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:49:14 crc kubenswrapper[4871]: E0128 15:49:14.905202 4871 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:49:26 crc kubenswrapper[4871]: I0128 15:49:26.904492 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:49:26 crc kubenswrapper[4871]: E0128 15:49:26.905287 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:49:30 crc kubenswrapper[4871]: I0128 15:49:30.970143 4871 scope.go:117] "RemoveContainer" containerID="10af46123495183e63bea233d340e710427d656264173106b6b892675db5d904" Jan 28 15:49:40 crc kubenswrapper[4871]: I0128 15:49:40.904325 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:49:40 crc kubenswrapper[4871]: E0128 15:49:40.905270 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:49:53 crc kubenswrapper[4871]: I0128 
15:49:53.905380 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:49:53 crc kubenswrapper[4871]: E0128 15:49:53.906695 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:50:06 crc kubenswrapper[4871]: I0128 15:50:06.904556 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:50:06 crc kubenswrapper[4871]: E0128 15:50:06.905813 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:50:21 crc kubenswrapper[4871]: I0128 15:50:21.904577 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:50:21 crc kubenswrapper[4871]: E0128 15:50:21.905426 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:50:34 crc 
kubenswrapper[4871]: I0128 15:50:34.903481 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:50:34 crc kubenswrapper[4871]: E0128 15:50:34.904179 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:50:49 crc kubenswrapper[4871]: I0128 15:50:49.903823 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:50:50 crc kubenswrapper[4871]: I0128 15:50:50.399651 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerStarted","Data":"102f9d6a8c8eb16b03feb0138985e3dc7c9eddd1199f871e408408308b296fbd"} Jan 28 15:51:03 crc kubenswrapper[4871]: I0128 15:51:03.534923 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4hqk6"] Jan 28 15:51:03 crc kubenswrapper[4871]: E0128 15:51:03.536011 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cabc60-22e8-477e-bc39-0fb426e43c5d" containerName="collect-profiles" Jan 28 15:51:03 crc kubenswrapper[4871]: I0128 15:51:03.536033 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cabc60-22e8-477e-bc39-0fb426e43c5d" containerName="collect-profiles" Jan 28 15:51:03 crc kubenswrapper[4871]: I0128 15:51:03.536323 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="08cabc60-22e8-477e-bc39-0fb426e43c5d" containerName="collect-profiles" Jan 28 15:51:03 crc kubenswrapper[4871]: I0128 
15:51:03.538676 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4hqk6" Jan 28 15:51:03 crc kubenswrapper[4871]: I0128 15:51:03.547418 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4hqk6"] Jan 28 15:51:03 crc kubenswrapper[4871]: I0128 15:51:03.677230 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bc56629-24ba-4be9-8ec1-1ac434eed1e9-catalog-content\") pod \"redhat-operators-4hqk6\" (UID: \"3bc56629-24ba-4be9-8ec1-1ac434eed1e9\") " pod="openshift-marketplace/redhat-operators-4hqk6" Jan 28 15:51:03 crc kubenswrapper[4871]: I0128 15:51:03.677366 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc56629-24ba-4be9-8ec1-1ac434eed1e9-utilities\") pod \"redhat-operators-4hqk6\" (UID: \"3bc56629-24ba-4be9-8ec1-1ac434eed1e9\") " pod="openshift-marketplace/redhat-operators-4hqk6" Jan 28 15:51:03 crc kubenswrapper[4871]: I0128 15:51:03.677497 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmhmk\" (UniqueName: \"kubernetes.io/projected/3bc56629-24ba-4be9-8ec1-1ac434eed1e9-kube-api-access-hmhmk\") pod \"redhat-operators-4hqk6\" (UID: \"3bc56629-24ba-4be9-8ec1-1ac434eed1e9\") " pod="openshift-marketplace/redhat-operators-4hqk6" Jan 28 15:51:03 crc kubenswrapper[4871]: I0128 15:51:03.779151 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmhmk\" (UniqueName: \"kubernetes.io/projected/3bc56629-24ba-4be9-8ec1-1ac434eed1e9-kube-api-access-hmhmk\") pod \"redhat-operators-4hqk6\" (UID: \"3bc56629-24ba-4be9-8ec1-1ac434eed1e9\") " pod="openshift-marketplace/redhat-operators-4hqk6" Jan 28 15:51:03 crc kubenswrapper[4871]: I0128 
15:51:03.779459 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bc56629-24ba-4be9-8ec1-1ac434eed1e9-catalog-content\") pod \"redhat-operators-4hqk6\" (UID: \"3bc56629-24ba-4be9-8ec1-1ac434eed1e9\") " pod="openshift-marketplace/redhat-operators-4hqk6" Jan 28 15:51:03 crc kubenswrapper[4871]: I0128 15:51:03.779565 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc56629-24ba-4be9-8ec1-1ac434eed1e9-utilities\") pod \"redhat-operators-4hqk6\" (UID: \"3bc56629-24ba-4be9-8ec1-1ac434eed1e9\") " pod="openshift-marketplace/redhat-operators-4hqk6" Jan 28 15:51:03 crc kubenswrapper[4871]: I0128 15:51:03.780031 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bc56629-24ba-4be9-8ec1-1ac434eed1e9-catalog-content\") pod \"redhat-operators-4hqk6\" (UID: \"3bc56629-24ba-4be9-8ec1-1ac434eed1e9\") " pod="openshift-marketplace/redhat-operators-4hqk6" Jan 28 15:51:03 crc kubenswrapper[4871]: I0128 15:51:03.780173 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc56629-24ba-4be9-8ec1-1ac434eed1e9-utilities\") pod \"redhat-operators-4hqk6\" (UID: \"3bc56629-24ba-4be9-8ec1-1ac434eed1e9\") " pod="openshift-marketplace/redhat-operators-4hqk6" Jan 28 15:51:03 crc kubenswrapper[4871]: I0128 15:51:03.797710 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmhmk\" (UniqueName: \"kubernetes.io/projected/3bc56629-24ba-4be9-8ec1-1ac434eed1e9-kube-api-access-hmhmk\") pod \"redhat-operators-4hqk6\" (UID: \"3bc56629-24ba-4be9-8ec1-1ac434eed1e9\") " pod="openshift-marketplace/redhat-operators-4hqk6" Jan 28 15:51:03 crc kubenswrapper[4871]: I0128 15:51:03.883913 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4hqk6" Jan 28 15:51:04 crc kubenswrapper[4871]: I0128 15:51:04.336381 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4hqk6"] Jan 28 15:51:04 crc kubenswrapper[4871]: I0128 15:51:04.509788 4871 generic.go:334] "Generic (PLEG): container finished" podID="3bc56629-24ba-4be9-8ec1-1ac434eed1e9" containerID="67054e067a4259ec3fd063692b7454163884e709ed575764fe84fcafe900caf2" exitCode=0 Jan 28 15:51:04 crc kubenswrapper[4871]: I0128 15:51:04.509826 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hqk6" event={"ID":"3bc56629-24ba-4be9-8ec1-1ac434eed1e9","Type":"ContainerDied","Data":"67054e067a4259ec3fd063692b7454163884e709ed575764fe84fcafe900caf2"} Jan 28 15:51:04 crc kubenswrapper[4871]: I0128 15:51:04.509851 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hqk6" event={"ID":"3bc56629-24ba-4be9-8ec1-1ac434eed1e9","Type":"ContainerStarted","Data":"ed1b803a1bc42bf75043d115c7bec3643947d3f3a02058e1994e6a66ab030ffc"} Jan 28 15:51:04 crc kubenswrapper[4871]: I0128 15:51:04.511716 4871 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:51:12 crc kubenswrapper[4871]: I0128 15:51:12.578629 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hqk6" event={"ID":"3bc56629-24ba-4be9-8ec1-1ac434eed1e9","Type":"ContainerStarted","Data":"8de59e426bab53e00e7c0d986eb8a5bdfae1e0fcfd776f3243a1b36a95fd6be5"} Jan 28 15:51:15 crc kubenswrapper[4871]: I0128 15:51:15.751415 4871 generic.go:334] "Generic (PLEG): container finished" podID="3bc56629-24ba-4be9-8ec1-1ac434eed1e9" containerID="8de59e426bab53e00e7c0d986eb8a5bdfae1e0fcfd776f3243a1b36a95fd6be5" exitCode=0 Jan 28 15:51:15 crc kubenswrapper[4871]: I0128 15:51:15.751501 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-4hqk6" event={"ID":"3bc56629-24ba-4be9-8ec1-1ac434eed1e9","Type":"ContainerDied","Data":"8de59e426bab53e00e7c0d986eb8a5bdfae1e0fcfd776f3243a1b36a95fd6be5"} Jan 28 15:51:17 crc kubenswrapper[4871]: I0128 15:51:17.797244 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hqk6" event={"ID":"3bc56629-24ba-4be9-8ec1-1ac434eed1e9","Type":"ContainerStarted","Data":"17b509ee90a212604c390175ed1bafe2a11bdf6411f425718b08c967710601e8"} Jan 28 15:51:17 crc kubenswrapper[4871]: I0128 15:51:17.821652 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4hqk6" podStartSLOduration=2.701619445 podStartE2EDuration="14.821629198s" podCreationTimestamp="2026-01-28 15:51:03 +0000 UTC" firstStartedPulling="2026-01-28 15:51:04.511503591 +0000 UTC m=+2016.407341913" lastFinishedPulling="2026-01-28 15:51:16.631513334 +0000 UTC m=+2028.527351666" observedRunningTime="2026-01-28 15:51:17.813936338 +0000 UTC m=+2029.709774660" watchObservedRunningTime="2026-01-28 15:51:17.821629198 +0000 UTC m=+2029.717467530" Jan 28 15:51:23 crc kubenswrapper[4871]: I0128 15:51:23.885116 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4hqk6" Jan 28 15:51:23 crc kubenswrapper[4871]: I0128 15:51:23.885638 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4hqk6" Jan 28 15:51:23 crc kubenswrapper[4871]: I0128 15:51:23.924230 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4hqk6" Jan 28 15:51:24 crc kubenswrapper[4871]: I0128 15:51:24.921563 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4hqk6" Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.016993 4871 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4hqk6"] Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.059968 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j8zlq"] Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.060316 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j8zlq" podUID="3bc84e74-a5d1-491f-9fec-7400c66214bc" containerName="registry-server" containerID="cri-o://e29844274fa88938aa6c728153144accef87b930b52ed8835b9d3b154ebf1f3e" gracePeriod=2 Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.456655 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j8zlq" Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.626256 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc84e74-a5d1-491f-9fec-7400c66214bc-utilities\") pod \"3bc84e74-a5d1-491f-9fec-7400c66214bc\" (UID: \"3bc84e74-a5d1-491f-9fec-7400c66214bc\") " Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.626473 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bc84e74-a5d1-491f-9fec-7400c66214bc-catalog-content\") pod \"3bc84e74-a5d1-491f-9fec-7400c66214bc\" (UID: \"3bc84e74-a5d1-491f-9fec-7400c66214bc\") " Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.626512 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlhkg\" (UniqueName: \"kubernetes.io/projected/3bc84e74-a5d1-491f-9fec-7400c66214bc-kube-api-access-rlhkg\") pod \"3bc84e74-a5d1-491f-9fec-7400c66214bc\" (UID: \"3bc84e74-a5d1-491f-9fec-7400c66214bc\") " Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.627027 4871 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/3bc84e74-a5d1-491f-9fec-7400c66214bc-utilities" (OuterVolumeSpecName: "utilities") pod "3bc84e74-a5d1-491f-9fec-7400c66214bc" (UID: "3bc84e74-a5d1-491f-9fec-7400c66214bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.634380 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bc84e74-a5d1-491f-9fec-7400c66214bc-kube-api-access-rlhkg" (OuterVolumeSpecName: "kube-api-access-rlhkg") pod "3bc84e74-a5d1-491f-9fec-7400c66214bc" (UID: "3bc84e74-a5d1-491f-9fec-7400c66214bc"). InnerVolumeSpecName "kube-api-access-rlhkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.728914 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlhkg\" (UniqueName: \"kubernetes.io/projected/3bc84e74-a5d1-491f-9fec-7400c66214bc-kube-api-access-rlhkg\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.728953 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc84e74-a5d1-491f-9fec-7400c66214bc-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.737838 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bc84e74-a5d1-491f-9fec-7400c66214bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bc84e74-a5d1-491f-9fec-7400c66214bc" (UID: "3bc84e74-a5d1-491f-9fec-7400c66214bc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.829955 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bc84e74-a5d1-491f-9fec-7400c66214bc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.863182 4871 generic.go:334] "Generic (PLEG): container finished" podID="3bc84e74-a5d1-491f-9fec-7400c66214bc" containerID="e29844274fa88938aa6c728153144accef87b930b52ed8835b9d3b154ebf1f3e" exitCode=0 Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.863230 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8zlq" event={"ID":"3bc84e74-a5d1-491f-9fec-7400c66214bc","Type":"ContainerDied","Data":"e29844274fa88938aa6c728153144accef87b930b52ed8835b9d3b154ebf1f3e"} Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.863290 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8zlq" event={"ID":"3bc84e74-a5d1-491f-9fec-7400c66214bc","Type":"ContainerDied","Data":"ba86054939933b6b7991c658da006b3bbe4026508609f3dc7458c6901f66cb57"} Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.863310 4871 scope.go:117] "RemoveContainer" containerID="e29844274fa88938aa6c728153144accef87b930b52ed8835b9d3b154ebf1f3e" Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.863247 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j8zlq" Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.884479 4871 scope.go:117] "RemoveContainer" containerID="2afe34c0bbe534ba52d1760f3e56308ed74df40497e70f8194b71098e9f246ac" Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.898409 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j8zlq"] Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.904305 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j8zlq"] Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.923032 4871 scope.go:117] "RemoveContainer" containerID="4820c1e908d3cc9c4b837de364740995013966647c5604c25e98a0923f963014" Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.952902 4871 scope.go:117] "RemoveContainer" containerID="e29844274fa88938aa6c728153144accef87b930b52ed8835b9d3b154ebf1f3e" Jan 28 15:51:25 crc kubenswrapper[4871]: E0128 15:51:25.953672 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e29844274fa88938aa6c728153144accef87b930b52ed8835b9d3b154ebf1f3e\": container with ID starting with e29844274fa88938aa6c728153144accef87b930b52ed8835b9d3b154ebf1f3e not found: ID does not exist" containerID="e29844274fa88938aa6c728153144accef87b930b52ed8835b9d3b154ebf1f3e" Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.953705 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e29844274fa88938aa6c728153144accef87b930b52ed8835b9d3b154ebf1f3e"} err="failed to get container status \"e29844274fa88938aa6c728153144accef87b930b52ed8835b9d3b154ebf1f3e\": rpc error: code = NotFound desc = could not find container \"e29844274fa88938aa6c728153144accef87b930b52ed8835b9d3b154ebf1f3e\": container with ID starting with e29844274fa88938aa6c728153144accef87b930b52ed8835b9d3b154ebf1f3e not found: ID does 
not exist" Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.953726 4871 scope.go:117] "RemoveContainer" containerID="2afe34c0bbe534ba52d1760f3e56308ed74df40497e70f8194b71098e9f246ac" Jan 28 15:51:25 crc kubenswrapper[4871]: E0128 15:51:25.954056 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2afe34c0bbe534ba52d1760f3e56308ed74df40497e70f8194b71098e9f246ac\": container with ID starting with 2afe34c0bbe534ba52d1760f3e56308ed74df40497e70f8194b71098e9f246ac not found: ID does not exist" containerID="2afe34c0bbe534ba52d1760f3e56308ed74df40497e70f8194b71098e9f246ac" Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.954102 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2afe34c0bbe534ba52d1760f3e56308ed74df40497e70f8194b71098e9f246ac"} err="failed to get container status \"2afe34c0bbe534ba52d1760f3e56308ed74df40497e70f8194b71098e9f246ac\": rpc error: code = NotFound desc = could not find container \"2afe34c0bbe534ba52d1760f3e56308ed74df40497e70f8194b71098e9f246ac\": container with ID starting with 2afe34c0bbe534ba52d1760f3e56308ed74df40497e70f8194b71098e9f246ac not found: ID does not exist" Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.954138 4871 scope.go:117] "RemoveContainer" containerID="4820c1e908d3cc9c4b837de364740995013966647c5604c25e98a0923f963014" Jan 28 15:51:25 crc kubenswrapper[4871]: E0128 15:51:25.954395 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4820c1e908d3cc9c4b837de364740995013966647c5604c25e98a0923f963014\": container with ID starting with 4820c1e908d3cc9c4b837de364740995013966647c5604c25e98a0923f963014 not found: ID does not exist" containerID="4820c1e908d3cc9c4b837de364740995013966647c5604c25e98a0923f963014" Jan 28 15:51:25 crc kubenswrapper[4871]: I0128 15:51:25.954431 4871 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4820c1e908d3cc9c4b837de364740995013966647c5604c25e98a0923f963014"} err="failed to get container status \"4820c1e908d3cc9c4b837de364740995013966647c5604c25e98a0923f963014\": rpc error: code = NotFound desc = could not find container \"4820c1e908d3cc9c4b837de364740995013966647c5604c25e98a0923f963014\": container with ID starting with 4820c1e908d3cc9c4b837de364740995013966647c5604c25e98a0923f963014 not found: ID does not exist" Jan 28 15:51:26 crc kubenswrapper[4871]: I0128 15:51:26.915059 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bc84e74-a5d1-491f-9fec-7400c66214bc" path="/var/lib/kubelet/pods/3bc84e74-a5d1-491f-9fec-7400c66214bc/volumes" Jan 28 15:52:34 crc kubenswrapper[4871]: I0128 15:52:34.182641 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9q4cm"] Jan 28 15:52:34 crc kubenswrapper[4871]: E0128 15:52:34.183531 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bc84e74-a5d1-491f-9fec-7400c66214bc" containerName="extract-utilities" Jan 28 15:52:34 crc kubenswrapper[4871]: I0128 15:52:34.183548 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc84e74-a5d1-491f-9fec-7400c66214bc" containerName="extract-utilities" Jan 28 15:52:34 crc kubenswrapper[4871]: E0128 15:52:34.183578 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bc84e74-a5d1-491f-9fec-7400c66214bc" containerName="registry-server" Jan 28 15:52:34 crc kubenswrapper[4871]: I0128 15:52:34.183629 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc84e74-a5d1-491f-9fec-7400c66214bc" containerName="registry-server" Jan 28 15:52:34 crc kubenswrapper[4871]: E0128 15:52:34.183643 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bc84e74-a5d1-491f-9fec-7400c66214bc" containerName="extract-content" Jan 28 15:52:34 crc kubenswrapper[4871]: I0128 15:52:34.183649 4871 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc84e74-a5d1-491f-9fec-7400c66214bc" containerName="extract-content" Jan 28 15:52:34 crc kubenswrapper[4871]: I0128 15:52:34.183806 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bc84e74-a5d1-491f-9fec-7400c66214bc" containerName="registry-server" Jan 28 15:52:34 crc kubenswrapper[4871]: I0128 15:52:34.185043 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9q4cm" Jan 28 15:52:34 crc kubenswrapper[4871]: I0128 15:52:34.197405 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9q4cm"] Jan 28 15:52:34 crc kubenswrapper[4871]: I0128 15:52:34.378794 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtt6x\" (UniqueName: \"kubernetes.io/projected/f7e79104-1ea3-44b1-ae2f-345e160d6b2f-kube-api-access-qtt6x\") pod \"redhat-marketplace-9q4cm\" (UID: \"f7e79104-1ea3-44b1-ae2f-345e160d6b2f\") " pod="openshift-marketplace/redhat-marketplace-9q4cm" Jan 28 15:52:34 crc kubenswrapper[4871]: I0128 15:52:34.379118 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7e79104-1ea3-44b1-ae2f-345e160d6b2f-utilities\") pod \"redhat-marketplace-9q4cm\" (UID: \"f7e79104-1ea3-44b1-ae2f-345e160d6b2f\") " pod="openshift-marketplace/redhat-marketplace-9q4cm" Jan 28 15:52:34 crc kubenswrapper[4871]: I0128 15:52:34.379218 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7e79104-1ea3-44b1-ae2f-345e160d6b2f-catalog-content\") pod \"redhat-marketplace-9q4cm\" (UID: \"f7e79104-1ea3-44b1-ae2f-345e160d6b2f\") " pod="openshift-marketplace/redhat-marketplace-9q4cm" Jan 28 15:52:34 crc kubenswrapper[4871]: I0128 15:52:34.480868 
4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7e79104-1ea3-44b1-ae2f-345e160d6b2f-catalog-content\") pod \"redhat-marketplace-9q4cm\" (UID: \"f7e79104-1ea3-44b1-ae2f-345e160d6b2f\") " pod="openshift-marketplace/redhat-marketplace-9q4cm" Jan 28 15:52:34 crc kubenswrapper[4871]: I0128 15:52:34.480960 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtt6x\" (UniqueName: \"kubernetes.io/projected/f7e79104-1ea3-44b1-ae2f-345e160d6b2f-kube-api-access-qtt6x\") pod \"redhat-marketplace-9q4cm\" (UID: \"f7e79104-1ea3-44b1-ae2f-345e160d6b2f\") " pod="openshift-marketplace/redhat-marketplace-9q4cm" Jan 28 15:52:34 crc kubenswrapper[4871]: I0128 15:52:34.480983 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7e79104-1ea3-44b1-ae2f-345e160d6b2f-utilities\") pod \"redhat-marketplace-9q4cm\" (UID: \"f7e79104-1ea3-44b1-ae2f-345e160d6b2f\") " pod="openshift-marketplace/redhat-marketplace-9q4cm" Jan 28 15:52:34 crc kubenswrapper[4871]: I0128 15:52:34.481321 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7e79104-1ea3-44b1-ae2f-345e160d6b2f-catalog-content\") pod \"redhat-marketplace-9q4cm\" (UID: \"f7e79104-1ea3-44b1-ae2f-345e160d6b2f\") " pod="openshift-marketplace/redhat-marketplace-9q4cm" Jan 28 15:52:34 crc kubenswrapper[4871]: I0128 15:52:34.481340 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7e79104-1ea3-44b1-ae2f-345e160d6b2f-utilities\") pod \"redhat-marketplace-9q4cm\" (UID: \"f7e79104-1ea3-44b1-ae2f-345e160d6b2f\") " pod="openshift-marketplace/redhat-marketplace-9q4cm" Jan 28 15:52:34 crc kubenswrapper[4871]: I0128 15:52:34.506527 4871 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qtt6x\" (UniqueName: \"kubernetes.io/projected/f7e79104-1ea3-44b1-ae2f-345e160d6b2f-kube-api-access-qtt6x\") pod \"redhat-marketplace-9q4cm\" (UID: \"f7e79104-1ea3-44b1-ae2f-345e160d6b2f\") " pod="openshift-marketplace/redhat-marketplace-9q4cm" Jan 28 15:52:34 crc kubenswrapper[4871]: I0128 15:52:34.515207 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9q4cm" Jan 28 15:52:35 crc kubenswrapper[4871]: I0128 15:52:35.021607 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9q4cm"] Jan 28 15:52:35 crc kubenswrapper[4871]: I0128 15:52:35.424503 4871 generic.go:334] "Generic (PLEG): container finished" podID="f7e79104-1ea3-44b1-ae2f-345e160d6b2f" containerID="1a386ae2e3c908e253e64de6e5c9a2dcd888b9554de3f7ea14003c1ece514a43" exitCode=0 Jan 28 15:52:35 crc kubenswrapper[4871]: I0128 15:52:35.424718 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9q4cm" event={"ID":"f7e79104-1ea3-44b1-ae2f-345e160d6b2f","Type":"ContainerDied","Data":"1a386ae2e3c908e253e64de6e5c9a2dcd888b9554de3f7ea14003c1ece514a43"} Jan 28 15:52:35 crc kubenswrapper[4871]: I0128 15:52:35.424974 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9q4cm" event={"ID":"f7e79104-1ea3-44b1-ae2f-345e160d6b2f","Type":"ContainerStarted","Data":"cc657dc97c2f8c9d1a6944036671adc85bfd8b4ddf8b78cc84ee9b4ffb99f0fc"} Jan 28 15:52:37 crc kubenswrapper[4871]: I0128 15:52:37.443763 4871 generic.go:334] "Generic (PLEG): container finished" podID="f7e79104-1ea3-44b1-ae2f-345e160d6b2f" containerID="842a51f73aaa64c2d055814afcdbd1d76c4aae059cbbb5211eb69a2c6e795cc2" exitCode=0 Jan 28 15:52:37 crc kubenswrapper[4871]: I0128 15:52:37.443819 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9q4cm" 
event={"ID":"f7e79104-1ea3-44b1-ae2f-345e160d6b2f","Type":"ContainerDied","Data":"842a51f73aaa64c2d055814afcdbd1d76c4aae059cbbb5211eb69a2c6e795cc2"} Jan 28 15:52:38 crc kubenswrapper[4871]: I0128 15:52:38.455955 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9q4cm" event={"ID":"f7e79104-1ea3-44b1-ae2f-345e160d6b2f","Type":"ContainerStarted","Data":"c28d0e9f56054c1beb72f81a5663755ece0100d154b879b1c3353d1dfc4daaf5"} Jan 28 15:52:44 crc kubenswrapper[4871]: I0128 15:52:44.515669 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9q4cm" Jan 28 15:52:44 crc kubenswrapper[4871]: I0128 15:52:44.517986 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9q4cm" Jan 28 15:52:44 crc kubenswrapper[4871]: I0128 15:52:44.616226 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9q4cm" Jan 28 15:52:44 crc kubenswrapper[4871]: I0128 15:52:44.656865 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9q4cm" podStartSLOduration=8.225593855 podStartE2EDuration="10.656841324s" podCreationTimestamp="2026-01-28 15:52:34 +0000 UTC" firstStartedPulling="2026-01-28 15:52:35.426512804 +0000 UTC m=+2107.322351166" lastFinishedPulling="2026-01-28 15:52:37.857760293 +0000 UTC m=+2109.753598635" observedRunningTime="2026-01-28 15:52:38.477316011 +0000 UTC m=+2110.373154343" watchObservedRunningTime="2026-01-28 15:52:44.656841324 +0000 UTC m=+2116.552679646" Jan 28 15:52:45 crc kubenswrapper[4871]: I0128 15:52:45.570978 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9q4cm" Jan 28 15:52:45 crc kubenswrapper[4871]: I0128 15:52:45.628675 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-9q4cm"] Jan 28 15:52:47 crc kubenswrapper[4871]: I0128 15:52:47.532961 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9q4cm" podUID="f7e79104-1ea3-44b1-ae2f-345e160d6b2f" containerName="registry-server" containerID="cri-o://c28d0e9f56054c1beb72f81a5663755ece0100d154b879b1c3353d1dfc4daaf5" gracePeriod=2 Jan 28 15:52:47 crc kubenswrapper[4871]: I0128 15:52:47.951452 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9q4cm" Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.105034 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtt6x\" (UniqueName: \"kubernetes.io/projected/f7e79104-1ea3-44b1-ae2f-345e160d6b2f-kube-api-access-qtt6x\") pod \"f7e79104-1ea3-44b1-ae2f-345e160d6b2f\" (UID: \"f7e79104-1ea3-44b1-ae2f-345e160d6b2f\") " Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.105508 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7e79104-1ea3-44b1-ae2f-345e160d6b2f-utilities\") pod \"f7e79104-1ea3-44b1-ae2f-345e160d6b2f\" (UID: \"f7e79104-1ea3-44b1-ae2f-345e160d6b2f\") " Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.105688 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7e79104-1ea3-44b1-ae2f-345e160d6b2f-catalog-content\") pod \"f7e79104-1ea3-44b1-ae2f-345e160d6b2f\" (UID: \"f7e79104-1ea3-44b1-ae2f-345e160d6b2f\") " Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.106666 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e79104-1ea3-44b1-ae2f-345e160d6b2f-utilities" (OuterVolumeSpecName: "utilities") pod "f7e79104-1ea3-44b1-ae2f-345e160d6b2f" (UID: 
"f7e79104-1ea3-44b1-ae2f-345e160d6b2f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.110701 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e79104-1ea3-44b1-ae2f-345e160d6b2f-kube-api-access-qtt6x" (OuterVolumeSpecName: "kube-api-access-qtt6x") pod "f7e79104-1ea3-44b1-ae2f-345e160d6b2f" (UID: "f7e79104-1ea3-44b1-ae2f-345e160d6b2f"). InnerVolumeSpecName "kube-api-access-qtt6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.147266 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e79104-1ea3-44b1-ae2f-345e160d6b2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7e79104-1ea3-44b1-ae2f-345e160d6b2f" (UID: "f7e79104-1ea3-44b1-ae2f-345e160d6b2f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.208378 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtt6x\" (UniqueName: \"kubernetes.io/projected/f7e79104-1ea3-44b1-ae2f-345e160d6b2f-kube-api-access-qtt6x\") on node \"crc\" DevicePath \"\"" Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.208416 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7e79104-1ea3-44b1-ae2f-345e160d6b2f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.208425 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7e79104-1ea3-44b1-ae2f-345e160d6b2f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.541648 4871 generic.go:334] "Generic (PLEG): container finished" 
podID="f7e79104-1ea3-44b1-ae2f-345e160d6b2f" containerID="c28d0e9f56054c1beb72f81a5663755ece0100d154b879b1c3353d1dfc4daaf5" exitCode=0 Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.541695 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9q4cm" Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.541717 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9q4cm" event={"ID":"f7e79104-1ea3-44b1-ae2f-345e160d6b2f","Type":"ContainerDied","Data":"c28d0e9f56054c1beb72f81a5663755ece0100d154b879b1c3353d1dfc4daaf5"} Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.542100 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9q4cm" event={"ID":"f7e79104-1ea3-44b1-ae2f-345e160d6b2f","Type":"ContainerDied","Data":"cc657dc97c2f8c9d1a6944036671adc85bfd8b4ddf8b78cc84ee9b4ffb99f0fc"} Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.542129 4871 scope.go:117] "RemoveContainer" containerID="c28d0e9f56054c1beb72f81a5663755ece0100d154b879b1c3353d1dfc4daaf5" Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.564912 4871 scope.go:117] "RemoveContainer" containerID="842a51f73aaa64c2d055814afcdbd1d76c4aae059cbbb5211eb69a2c6e795cc2" Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.566886 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9q4cm"] Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.579934 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9q4cm"] Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.581076 4871 scope.go:117] "RemoveContainer" containerID="1a386ae2e3c908e253e64de6e5c9a2dcd888b9554de3f7ea14003c1ece514a43" Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.653558 4871 scope.go:117] "RemoveContainer" 
containerID="c28d0e9f56054c1beb72f81a5663755ece0100d154b879b1c3353d1dfc4daaf5" Jan 28 15:52:48 crc kubenswrapper[4871]: E0128 15:52:48.653885 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c28d0e9f56054c1beb72f81a5663755ece0100d154b879b1c3353d1dfc4daaf5\": container with ID starting with c28d0e9f56054c1beb72f81a5663755ece0100d154b879b1c3353d1dfc4daaf5 not found: ID does not exist" containerID="c28d0e9f56054c1beb72f81a5663755ece0100d154b879b1c3353d1dfc4daaf5" Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.653912 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28d0e9f56054c1beb72f81a5663755ece0100d154b879b1c3353d1dfc4daaf5"} err="failed to get container status \"c28d0e9f56054c1beb72f81a5663755ece0100d154b879b1c3353d1dfc4daaf5\": rpc error: code = NotFound desc = could not find container \"c28d0e9f56054c1beb72f81a5663755ece0100d154b879b1c3353d1dfc4daaf5\": container with ID starting with c28d0e9f56054c1beb72f81a5663755ece0100d154b879b1c3353d1dfc4daaf5 not found: ID does not exist" Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.653933 4871 scope.go:117] "RemoveContainer" containerID="842a51f73aaa64c2d055814afcdbd1d76c4aae059cbbb5211eb69a2c6e795cc2" Jan 28 15:52:48 crc kubenswrapper[4871]: E0128 15:52:48.654145 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"842a51f73aaa64c2d055814afcdbd1d76c4aae059cbbb5211eb69a2c6e795cc2\": container with ID starting with 842a51f73aaa64c2d055814afcdbd1d76c4aae059cbbb5211eb69a2c6e795cc2 not found: ID does not exist" containerID="842a51f73aaa64c2d055814afcdbd1d76c4aae059cbbb5211eb69a2c6e795cc2" Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.654166 4871 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"842a51f73aaa64c2d055814afcdbd1d76c4aae059cbbb5211eb69a2c6e795cc2"} err="failed to get container status \"842a51f73aaa64c2d055814afcdbd1d76c4aae059cbbb5211eb69a2c6e795cc2\": rpc error: code = NotFound desc = could not find container \"842a51f73aaa64c2d055814afcdbd1d76c4aae059cbbb5211eb69a2c6e795cc2\": container with ID starting with 842a51f73aaa64c2d055814afcdbd1d76c4aae059cbbb5211eb69a2c6e795cc2 not found: ID does not exist" Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.654179 4871 scope.go:117] "RemoveContainer" containerID="1a386ae2e3c908e253e64de6e5c9a2dcd888b9554de3f7ea14003c1ece514a43" Jan 28 15:52:48 crc kubenswrapper[4871]: E0128 15:52:48.654354 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a386ae2e3c908e253e64de6e5c9a2dcd888b9554de3f7ea14003c1ece514a43\": container with ID starting with 1a386ae2e3c908e253e64de6e5c9a2dcd888b9554de3f7ea14003c1ece514a43 not found: ID does not exist" containerID="1a386ae2e3c908e253e64de6e5c9a2dcd888b9554de3f7ea14003c1ece514a43" Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.654373 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a386ae2e3c908e253e64de6e5c9a2dcd888b9554de3f7ea14003c1ece514a43"} err="failed to get container status \"1a386ae2e3c908e253e64de6e5c9a2dcd888b9554de3f7ea14003c1ece514a43\": rpc error: code = NotFound desc = could not find container \"1a386ae2e3c908e253e64de6e5c9a2dcd888b9554de3f7ea14003c1ece514a43\": container with ID starting with 1a386ae2e3c908e253e64de6e5c9a2dcd888b9554de3f7ea14003c1ece514a43 not found: ID does not exist" Jan 28 15:52:48 crc kubenswrapper[4871]: I0128 15:52:48.914632 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e79104-1ea3-44b1-ae2f-345e160d6b2f" path="/var/lib/kubelet/pods/f7e79104-1ea3-44b1-ae2f-345e160d6b2f/volumes" Jan 28 15:53:13 crc kubenswrapper[4871]: I0128 
15:53:13.813989 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:53:13 crc kubenswrapper[4871]: I0128 15:53:13.814562 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:53:28 crc kubenswrapper[4871]: I0128 15:53:28.259701 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cpdnl"] Jan 28 15:53:28 crc kubenswrapper[4871]: E0128 15:53:28.260914 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e79104-1ea3-44b1-ae2f-345e160d6b2f" containerName="registry-server" Jan 28 15:53:28 crc kubenswrapper[4871]: I0128 15:53:28.260935 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e79104-1ea3-44b1-ae2f-345e160d6b2f" containerName="registry-server" Jan 28 15:53:28 crc kubenswrapper[4871]: E0128 15:53:28.260960 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e79104-1ea3-44b1-ae2f-345e160d6b2f" containerName="extract-utilities" Jan 28 15:53:28 crc kubenswrapper[4871]: I0128 15:53:28.260970 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e79104-1ea3-44b1-ae2f-345e160d6b2f" containerName="extract-utilities" Jan 28 15:53:28 crc kubenswrapper[4871]: E0128 15:53:28.260993 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e79104-1ea3-44b1-ae2f-345e160d6b2f" containerName="extract-content" Jan 28 15:53:28 crc kubenswrapper[4871]: I0128 15:53:28.261005 4871 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f7e79104-1ea3-44b1-ae2f-345e160d6b2f" containerName="extract-content" Jan 28 15:53:28 crc kubenswrapper[4871]: I0128 15:53:28.261242 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7e79104-1ea3-44b1-ae2f-345e160d6b2f" containerName="registry-server" Jan 28 15:53:28 crc kubenswrapper[4871]: I0128 15:53:28.263506 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cpdnl" Jan 28 15:53:28 crc kubenswrapper[4871]: I0128 15:53:28.279071 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cpdnl"] Jan 28 15:53:28 crc kubenswrapper[4871]: I0128 15:53:28.279360 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd50e157-b967-41a6-94e4-4b4c6482e592-utilities\") pod \"community-operators-cpdnl\" (UID: \"fd50e157-b967-41a6-94e4-4b4c6482e592\") " pod="openshift-marketplace/community-operators-cpdnl" Jan 28 15:53:28 crc kubenswrapper[4871]: I0128 15:53:28.279435 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztdcj\" (UniqueName: \"kubernetes.io/projected/fd50e157-b967-41a6-94e4-4b4c6482e592-kube-api-access-ztdcj\") pod \"community-operators-cpdnl\" (UID: \"fd50e157-b967-41a6-94e4-4b4c6482e592\") " pod="openshift-marketplace/community-operators-cpdnl" Jan 28 15:53:28 crc kubenswrapper[4871]: I0128 15:53:28.279729 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd50e157-b967-41a6-94e4-4b4c6482e592-catalog-content\") pod \"community-operators-cpdnl\" (UID: \"fd50e157-b967-41a6-94e4-4b4c6482e592\") " pod="openshift-marketplace/community-operators-cpdnl" Jan 28 15:53:28 crc kubenswrapper[4871]: I0128 15:53:28.381075 4871 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ztdcj\" (UniqueName: \"kubernetes.io/projected/fd50e157-b967-41a6-94e4-4b4c6482e592-kube-api-access-ztdcj\") pod \"community-operators-cpdnl\" (UID: \"fd50e157-b967-41a6-94e4-4b4c6482e592\") " pod="openshift-marketplace/community-operators-cpdnl" Jan 28 15:53:28 crc kubenswrapper[4871]: I0128 15:53:28.381219 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd50e157-b967-41a6-94e4-4b4c6482e592-catalog-content\") pod \"community-operators-cpdnl\" (UID: \"fd50e157-b967-41a6-94e4-4b4c6482e592\") " pod="openshift-marketplace/community-operators-cpdnl" Jan 28 15:53:28 crc kubenswrapper[4871]: I0128 15:53:28.381252 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd50e157-b967-41a6-94e4-4b4c6482e592-utilities\") pod \"community-operators-cpdnl\" (UID: \"fd50e157-b967-41a6-94e4-4b4c6482e592\") " pod="openshift-marketplace/community-operators-cpdnl" Jan 28 15:53:28 crc kubenswrapper[4871]: I0128 15:53:28.381677 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd50e157-b967-41a6-94e4-4b4c6482e592-utilities\") pod \"community-operators-cpdnl\" (UID: \"fd50e157-b967-41a6-94e4-4b4c6482e592\") " pod="openshift-marketplace/community-operators-cpdnl" Jan 28 15:53:28 crc kubenswrapper[4871]: I0128 15:53:28.381804 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd50e157-b967-41a6-94e4-4b4c6482e592-catalog-content\") pod \"community-operators-cpdnl\" (UID: \"fd50e157-b967-41a6-94e4-4b4c6482e592\") " pod="openshift-marketplace/community-operators-cpdnl" Jan 28 15:53:28 crc kubenswrapper[4871]: I0128 15:53:28.401738 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ztdcj\" (UniqueName: \"kubernetes.io/projected/fd50e157-b967-41a6-94e4-4b4c6482e592-kube-api-access-ztdcj\") pod \"community-operators-cpdnl\" (UID: \"fd50e157-b967-41a6-94e4-4b4c6482e592\") " pod="openshift-marketplace/community-operators-cpdnl" Jan 28 15:53:28 crc kubenswrapper[4871]: I0128 15:53:28.605940 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cpdnl" Jan 28 15:53:29 crc kubenswrapper[4871]: I0128 15:53:29.163318 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cpdnl"] Jan 28 15:53:29 crc kubenswrapper[4871]: I0128 15:53:29.889223 4871 generic.go:334] "Generic (PLEG): container finished" podID="fd50e157-b967-41a6-94e4-4b4c6482e592" containerID="e70c6e85815986ffd8c9a5b6e77c811d802f8352800d34f8e41ecfef296b8dcf" exitCode=0 Jan 28 15:53:29 crc kubenswrapper[4871]: I0128 15:53:29.889307 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpdnl" event={"ID":"fd50e157-b967-41a6-94e4-4b4c6482e592","Type":"ContainerDied","Data":"e70c6e85815986ffd8c9a5b6e77c811d802f8352800d34f8e41ecfef296b8dcf"} Jan 28 15:53:29 crc kubenswrapper[4871]: I0128 15:53:29.889529 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpdnl" event={"ID":"fd50e157-b967-41a6-94e4-4b4c6482e592","Type":"ContainerStarted","Data":"c0388d0bdbef14cb728b2affde319ca0fe1f47abc7a0de73da2e8de0d1569817"} Jan 28 15:53:30 crc kubenswrapper[4871]: I0128 15:53:30.897971 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpdnl" event={"ID":"fd50e157-b967-41a6-94e4-4b4c6482e592","Type":"ContainerStarted","Data":"84b29ccb291355065f4b725399f40029062116b2ca02c558328953b69fd9d482"} Jan 28 15:53:31 crc kubenswrapper[4871]: I0128 15:53:31.911783 4871 generic.go:334] "Generic (PLEG): container finished" 
podID="fd50e157-b967-41a6-94e4-4b4c6482e592" containerID="84b29ccb291355065f4b725399f40029062116b2ca02c558328953b69fd9d482" exitCode=0 Jan 28 15:53:31 crc kubenswrapper[4871]: I0128 15:53:31.911852 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpdnl" event={"ID":"fd50e157-b967-41a6-94e4-4b4c6482e592","Type":"ContainerDied","Data":"84b29ccb291355065f4b725399f40029062116b2ca02c558328953b69fd9d482"} Jan 28 15:53:32 crc kubenswrapper[4871]: I0128 15:53:32.926738 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpdnl" event={"ID":"fd50e157-b967-41a6-94e4-4b4c6482e592","Type":"ContainerStarted","Data":"b2f5324fe222c825f15863e3eb503324e6e5f14cbb7f7fea123277206971b9c7"} Jan 28 15:53:32 crc kubenswrapper[4871]: I0128 15:53:32.950247 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cpdnl" podStartSLOduration=2.367157968 podStartE2EDuration="4.950226795s" podCreationTimestamp="2026-01-28 15:53:28 +0000 UTC" firstStartedPulling="2026-01-28 15:53:29.89219589 +0000 UTC m=+2161.788034222" lastFinishedPulling="2026-01-28 15:53:32.475264697 +0000 UTC m=+2164.371103049" observedRunningTime="2026-01-28 15:53:32.946399654 +0000 UTC m=+2164.842237976" watchObservedRunningTime="2026-01-28 15:53:32.950226795 +0000 UTC m=+2164.846065117" Jan 28 15:53:38 crc kubenswrapper[4871]: I0128 15:53:38.607057 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cpdnl" Jan 28 15:53:38 crc kubenswrapper[4871]: I0128 15:53:38.607803 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cpdnl" Jan 28 15:53:38 crc kubenswrapper[4871]: I0128 15:53:38.665123 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cpdnl" Jan 28 15:53:39 
crc kubenswrapper[4871]: I0128 15:53:39.046972 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cpdnl" Jan 28 15:53:39 crc kubenswrapper[4871]: I0128 15:53:39.105906 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cpdnl"] Jan 28 15:53:41 crc kubenswrapper[4871]: I0128 15:53:41.003493 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cpdnl" podUID="fd50e157-b967-41a6-94e4-4b4c6482e592" containerName="registry-server" containerID="cri-o://b2f5324fe222c825f15863e3eb503324e6e5f14cbb7f7fea123277206971b9c7" gracePeriod=2 Jan 28 15:53:41 crc kubenswrapper[4871]: I0128 15:53:41.432340 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cpdnl" Jan 28 15:53:41 crc kubenswrapper[4871]: I0128 15:53:41.612240 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd50e157-b967-41a6-94e4-4b4c6482e592-utilities\") pod \"fd50e157-b967-41a6-94e4-4b4c6482e592\" (UID: \"fd50e157-b967-41a6-94e4-4b4c6482e592\") " Jan 28 15:53:41 crc kubenswrapper[4871]: I0128 15:53:41.612318 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztdcj\" (UniqueName: \"kubernetes.io/projected/fd50e157-b967-41a6-94e4-4b4c6482e592-kube-api-access-ztdcj\") pod \"fd50e157-b967-41a6-94e4-4b4c6482e592\" (UID: \"fd50e157-b967-41a6-94e4-4b4c6482e592\") " Jan 28 15:53:41 crc kubenswrapper[4871]: I0128 15:53:41.612355 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd50e157-b967-41a6-94e4-4b4c6482e592-catalog-content\") pod \"fd50e157-b967-41a6-94e4-4b4c6482e592\" (UID: \"fd50e157-b967-41a6-94e4-4b4c6482e592\") " Jan 28 
15:53:41 crc kubenswrapper[4871]: I0128 15:53:41.613503 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd50e157-b967-41a6-94e4-4b4c6482e592-utilities" (OuterVolumeSpecName: "utilities") pod "fd50e157-b967-41a6-94e4-4b4c6482e592" (UID: "fd50e157-b967-41a6-94e4-4b4c6482e592"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:53:41 crc kubenswrapper[4871]: I0128 15:53:41.620209 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd50e157-b967-41a6-94e4-4b4c6482e592-kube-api-access-ztdcj" (OuterVolumeSpecName: "kube-api-access-ztdcj") pod "fd50e157-b967-41a6-94e4-4b4c6482e592" (UID: "fd50e157-b967-41a6-94e4-4b4c6482e592"). InnerVolumeSpecName "kube-api-access-ztdcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:53:41 crc kubenswrapper[4871]: I0128 15:53:41.688401 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd50e157-b967-41a6-94e4-4b4c6482e592-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd50e157-b967-41a6-94e4-4b4c6482e592" (UID: "fd50e157-b967-41a6-94e4-4b4c6482e592"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:53:41 crc kubenswrapper[4871]: I0128 15:53:41.714347 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd50e157-b967-41a6-94e4-4b4c6482e592-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:53:41 crc kubenswrapper[4871]: I0128 15:53:41.714398 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztdcj\" (UniqueName: \"kubernetes.io/projected/fd50e157-b967-41a6-94e4-4b4c6482e592-kube-api-access-ztdcj\") on node \"crc\" DevicePath \"\"" Jan 28 15:53:41 crc kubenswrapper[4871]: I0128 15:53:41.714413 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd50e157-b967-41a6-94e4-4b4c6482e592-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:53:42 crc kubenswrapper[4871]: I0128 15:53:42.012398 4871 generic.go:334] "Generic (PLEG): container finished" podID="fd50e157-b967-41a6-94e4-4b4c6482e592" containerID="b2f5324fe222c825f15863e3eb503324e6e5f14cbb7f7fea123277206971b9c7" exitCode=0 Jan 28 15:53:42 crc kubenswrapper[4871]: I0128 15:53:42.012441 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpdnl" event={"ID":"fd50e157-b967-41a6-94e4-4b4c6482e592","Type":"ContainerDied","Data":"b2f5324fe222c825f15863e3eb503324e6e5f14cbb7f7fea123277206971b9c7"} Jan 28 15:53:42 crc kubenswrapper[4871]: I0128 15:53:42.012466 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cpdnl" Jan 28 15:53:42 crc kubenswrapper[4871]: I0128 15:53:42.012477 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpdnl" event={"ID":"fd50e157-b967-41a6-94e4-4b4c6482e592","Type":"ContainerDied","Data":"c0388d0bdbef14cb728b2affde319ca0fe1f47abc7a0de73da2e8de0d1569817"} Jan 28 15:53:42 crc kubenswrapper[4871]: I0128 15:53:42.012494 4871 scope.go:117] "RemoveContainer" containerID="b2f5324fe222c825f15863e3eb503324e6e5f14cbb7f7fea123277206971b9c7" Jan 28 15:53:42 crc kubenswrapper[4871]: I0128 15:53:42.034998 4871 scope.go:117] "RemoveContainer" containerID="84b29ccb291355065f4b725399f40029062116b2ca02c558328953b69fd9d482" Jan 28 15:53:42 crc kubenswrapper[4871]: I0128 15:53:42.068220 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cpdnl"] Jan 28 15:53:42 crc kubenswrapper[4871]: I0128 15:53:42.075606 4871 scope.go:117] "RemoveContainer" containerID="e70c6e85815986ffd8c9a5b6e77c811d802f8352800d34f8e41ecfef296b8dcf" Jan 28 15:53:42 crc kubenswrapper[4871]: I0128 15:53:42.076569 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cpdnl"] Jan 28 15:53:42 crc kubenswrapper[4871]: I0128 15:53:42.106521 4871 scope.go:117] "RemoveContainer" containerID="b2f5324fe222c825f15863e3eb503324e6e5f14cbb7f7fea123277206971b9c7" Jan 28 15:53:42 crc kubenswrapper[4871]: E0128 15:53:42.107021 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2f5324fe222c825f15863e3eb503324e6e5f14cbb7f7fea123277206971b9c7\": container with ID starting with b2f5324fe222c825f15863e3eb503324e6e5f14cbb7f7fea123277206971b9c7 not found: ID does not exist" containerID="b2f5324fe222c825f15863e3eb503324e6e5f14cbb7f7fea123277206971b9c7" Jan 28 15:53:42 crc kubenswrapper[4871]: I0128 15:53:42.107061 4871 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2f5324fe222c825f15863e3eb503324e6e5f14cbb7f7fea123277206971b9c7"} err="failed to get container status \"b2f5324fe222c825f15863e3eb503324e6e5f14cbb7f7fea123277206971b9c7\": rpc error: code = NotFound desc = could not find container \"b2f5324fe222c825f15863e3eb503324e6e5f14cbb7f7fea123277206971b9c7\": container with ID starting with b2f5324fe222c825f15863e3eb503324e6e5f14cbb7f7fea123277206971b9c7 not found: ID does not exist" Jan 28 15:53:42 crc kubenswrapper[4871]: I0128 15:53:42.107107 4871 scope.go:117] "RemoveContainer" containerID="84b29ccb291355065f4b725399f40029062116b2ca02c558328953b69fd9d482" Jan 28 15:53:42 crc kubenswrapper[4871]: E0128 15:53:42.107402 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b29ccb291355065f4b725399f40029062116b2ca02c558328953b69fd9d482\": container with ID starting with 84b29ccb291355065f4b725399f40029062116b2ca02c558328953b69fd9d482 not found: ID does not exist" containerID="84b29ccb291355065f4b725399f40029062116b2ca02c558328953b69fd9d482" Jan 28 15:53:42 crc kubenswrapper[4871]: I0128 15:53:42.107438 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b29ccb291355065f4b725399f40029062116b2ca02c558328953b69fd9d482"} err="failed to get container status \"84b29ccb291355065f4b725399f40029062116b2ca02c558328953b69fd9d482\": rpc error: code = NotFound desc = could not find container \"84b29ccb291355065f4b725399f40029062116b2ca02c558328953b69fd9d482\": container with ID starting with 84b29ccb291355065f4b725399f40029062116b2ca02c558328953b69fd9d482 not found: ID does not exist" Jan 28 15:53:42 crc kubenswrapper[4871]: I0128 15:53:42.107452 4871 scope.go:117] "RemoveContainer" containerID="e70c6e85815986ffd8c9a5b6e77c811d802f8352800d34f8e41ecfef296b8dcf" Jan 28 15:53:42 crc kubenswrapper[4871]: E0128 
15:53:42.107724 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e70c6e85815986ffd8c9a5b6e77c811d802f8352800d34f8e41ecfef296b8dcf\": container with ID starting with e70c6e85815986ffd8c9a5b6e77c811d802f8352800d34f8e41ecfef296b8dcf not found: ID does not exist" containerID="e70c6e85815986ffd8c9a5b6e77c811d802f8352800d34f8e41ecfef296b8dcf" Jan 28 15:53:42 crc kubenswrapper[4871]: I0128 15:53:42.107756 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e70c6e85815986ffd8c9a5b6e77c811d802f8352800d34f8e41ecfef296b8dcf"} err="failed to get container status \"e70c6e85815986ffd8c9a5b6e77c811d802f8352800d34f8e41ecfef296b8dcf\": rpc error: code = NotFound desc = could not find container \"e70c6e85815986ffd8c9a5b6e77c811d802f8352800d34f8e41ecfef296b8dcf\": container with ID starting with e70c6e85815986ffd8c9a5b6e77c811d802f8352800d34f8e41ecfef296b8dcf not found: ID does not exist" Jan 28 15:53:42 crc kubenswrapper[4871]: I0128 15:53:42.915812 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd50e157-b967-41a6-94e4-4b4c6482e592" path="/var/lib/kubelet/pods/fd50e157-b967-41a6-94e4-4b4c6482e592/volumes" Jan 28 15:53:43 crc kubenswrapper[4871]: I0128 15:53:43.813727 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:53:43 crc kubenswrapper[4871]: I0128 15:53:43.813787 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 28 15:53:59 crc kubenswrapper[4871]: I0128 15:53:59.154970 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lssmh"] Jan 28 15:53:59 crc kubenswrapper[4871]: E0128 15:53:59.156559 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd50e157-b967-41a6-94e4-4b4c6482e592" containerName="extract-utilities" Jan 28 15:53:59 crc kubenswrapper[4871]: I0128 15:53:59.156578 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd50e157-b967-41a6-94e4-4b4c6482e592" containerName="extract-utilities" Jan 28 15:53:59 crc kubenswrapper[4871]: E0128 15:53:59.156608 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd50e157-b967-41a6-94e4-4b4c6482e592" containerName="extract-content" Jan 28 15:53:59 crc kubenswrapper[4871]: I0128 15:53:59.156617 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd50e157-b967-41a6-94e4-4b4c6482e592" containerName="extract-content" Jan 28 15:53:59 crc kubenswrapper[4871]: E0128 15:53:59.156650 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd50e157-b967-41a6-94e4-4b4c6482e592" containerName="registry-server" Jan 28 15:53:59 crc kubenswrapper[4871]: I0128 15:53:59.156659 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd50e157-b967-41a6-94e4-4b4c6482e592" containerName="registry-server" Jan 28 15:53:59 crc kubenswrapper[4871]: I0128 15:53:59.156918 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd50e157-b967-41a6-94e4-4b4c6482e592" containerName="registry-server" Jan 28 15:53:59 crc kubenswrapper[4871]: I0128 15:53:59.158579 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lssmh" Jan 28 15:53:59 crc kubenswrapper[4871]: I0128 15:53:59.194257 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lssmh"] Jan 28 15:53:59 crc kubenswrapper[4871]: I0128 15:53:59.319211 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6de6b9-774a-4453-bc72-d19dc9cfe7d2-utilities\") pod \"certified-operators-lssmh\" (UID: \"af6de6b9-774a-4453-bc72-d19dc9cfe7d2\") " pod="openshift-marketplace/certified-operators-lssmh" Jan 28 15:53:59 crc kubenswrapper[4871]: I0128 15:53:59.319536 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn7mn\" (UniqueName: \"kubernetes.io/projected/af6de6b9-774a-4453-bc72-d19dc9cfe7d2-kube-api-access-fn7mn\") pod \"certified-operators-lssmh\" (UID: \"af6de6b9-774a-4453-bc72-d19dc9cfe7d2\") " pod="openshift-marketplace/certified-operators-lssmh" Jan 28 15:53:59 crc kubenswrapper[4871]: I0128 15:53:59.319731 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6de6b9-774a-4453-bc72-d19dc9cfe7d2-catalog-content\") pod \"certified-operators-lssmh\" (UID: \"af6de6b9-774a-4453-bc72-d19dc9cfe7d2\") " pod="openshift-marketplace/certified-operators-lssmh" Jan 28 15:53:59 crc kubenswrapper[4871]: I0128 15:53:59.420976 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6de6b9-774a-4453-bc72-d19dc9cfe7d2-utilities\") pod \"certified-operators-lssmh\" (UID: \"af6de6b9-774a-4453-bc72-d19dc9cfe7d2\") " pod="openshift-marketplace/certified-operators-lssmh" Jan 28 15:53:59 crc kubenswrapper[4871]: I0128 15:53:59.421346 4871 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fn7mn\" (UniqueName: \"kubernetes.io/projected/af6de6b9-774a-4453-bc72-d19dc9cfe7d2-kube-api-access-fn7mn\") pod \"certified-operators-lssmh\" (UID: \"af6de6b9-774a-4453-bc72-d19dc9cfe7d2\") " pod="openshift-marketplace/certified-operators-lssmh" Jan 28 15:53:59 crc kubenswrapper[4871]: I0128 15:53:59.421476 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6de6b9-774a-4453-bc72-d19dc9cfe7d2-catalog-content\") pod \"certified-operators-lssmh\" (UID: \"af6de6b9-774a-4453-bc72-d19dc9cfe7d2\") " pod="openshift-marketplace/certified-operators-lssmh" Jan 28 15:53:59 crc kubenswrapper[4871]: I0128 15:53:59.421661 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6de6b9-774a-4453-bc72-d19dc9cfe7d2-utilities\") pod \"certified-operators-lssmh\" (UID: \"af6de6b9-774a-4453-bc72-d19dc9cfe7d2\") " pod="openshift-marketplace/certified-operators-lssmh" Jan 28 15:53:59 crc kubenswrapper[4871]: I0128 15:53:59.421987 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6de6b9-774a-4453-bc72-d19dc9cfe7d2-catalog-content\") pod \"certified-operators-lssmh\" (UID: \"af6de6b9-774a-4453-bc72-d19dc9cfe7d2\") " pod="openshift-marketplace/certified-operators-lssmh" Jan 28 15:53:59 crc kubenswrapper[4871]: I0128 15:53:59.463132 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn7mn\" (UniqueName: \"kubernetes.io/projected/af6de6b9-774a-4453-bc72-d19dc9cfe7d2-kube-api-access-fn7mn\") pod \"certified-operators-lssmh\" (UID: \"af6de6b9-774a-4453-bc72-d19dc9cfe7d2\") " pod="openshift-marketplace/certified-operators-lssmh" Jan 28 15:53:59 crc kubenswrapper[4871]: I0128 15:53:59.507137 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lssmh" Jan 28 15:53:59 crc kubenswrapper[4871]: I0128 15:53:59.979266 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lssmh"] Jan 28 15:54:00 crc kubenswrapper[4871]: I0128 15:54:00.171844 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lssmh" event={"ID":"af6de6b9-774a-4453-bc72-d19dc9cfe7d2","Type":"ContainerStarted","Data":"e16accc9a4827c98e6a0cffc89ebda381b5ab6e2a277982a9dd1470b52ffdd53"} Jan 28 15:54:00 crc kubenswrapper[4871]: I0128 15:54:00.172117 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lssmh" event={"ID":"af6de6b9-774a-4453-bc72-d19dc9cfe7d2","Type":"ContainerStarted","Data":"663096afce49fd3073c8ac7e851c2e0de9628e82bdf417e644de40cf5cbfebe5"} Jan 28 15:54:01 crc kubenswrapper[4871]: I0128 15:54:01.184359 4871 generic.go:334] "Generic (PLEG): container finished" podID="af6de6b9-774a-4453-bc72-d19dc9cfe7d2" containerID="e16accc9a4827c98e6a0cffc89ebda381b5ab6e2a277982a9dd1470b52ffdd53" exitCode=0 Jan 28 15:54:01 crc kubenswrapper[4871]: I0128 15:54:01.184435 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lssmh" event={"ID":"af6de6b9-774a-4453-bc72-d19dc9cfe7d2","Type":"ContainerDied","Data":"e16accc9a4827c98e6a0cffc89ebda381b5ab6e2a277982a9dd1470b52ffdd53"} Jan 28 15:54:02 crc kubenswrapper[4871]: I0128 15:54:02.195534 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lssmh" event={"ID":"af6de6b9-774a-4453-bc72-d19dc9cfe7d2","Type":"ContainerStarted","Data":"c1f695bdd36bc024d60a3be1a8e5d016991273e873f9cf094e111f2fc3319bc4"} Jan 28 15:54:03 crc kubenswrapper[4871]: I0128 15:54:03.208686 4871 generic.go:334] "Generic (PLEG): container finished" podID="af6de6b9-774a-4453-bc72-d19dc9cfe7d2" 
containerID="c1f695bdd36bc024d60a3be1a8e5d016991273e873f9cf094e111f2fc3319bc4" exitCode=0 Jan 28 15:54:03 crc kubenswrapper[4871]: I0128 15:54:03.209052 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lssmh" event={"ID":"af6de6b9-774a-4453-bc72-d19dc9cfe7d2","Type":"ContainerDied","Data":"c1f695bdd36bc024d60a3be1a8e5d016991273e873f9cf094e111f2fc3319bc4"} Jan 28 15:54:04 crc kubenswrapper[4871]: I0128 15:54:04.221031 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lssmh" event={"ID":"af6de6b9-774a-4453-bc72-d19dc9cfe7d2","Type":"ContainerStarted","Data":"aac12414a4fb6e30dc6541ddcd926263d793adc5b199a86e0b4d90927f9e8492"} Jan 28 15:54:04 crc kubenswrapper[4871]: I0128 15:54:04.242520 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lssmh" podStartSLOduration=2.779253257 podStartE2EDuration="5.242491352s" podCreationTimestamp="2026-01-28 15:53:59 +0000 UTC" firstStartedPulling="2026-01-28 15:54:01.187757951 +0000 UTC m=+2193.083596303" lastFinishedPulling="2026-01-28 15:54:03.650996076 +0000 UTC m=+2195.546834398" observedRunningTime="2026-01-28 15:54:04.239011664 +0000 UTC m=+2196.134850026" watchObservedRunningTime="2026-01-28 15:54:04.242491352 +0000 UTC m=+2196.138329694" Jan 28 15:54:09 crc kubenswrapper[4871]: I0128 15:54:09.507425 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lssmh" Jan 28 15:54:09 crc kubenswrapper[4871]: I0128 15:54:09.507864 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lssmh" Jan 28 15:54:09 crc kubenswrapper[4871]: I0128 15:54:09.550054 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lssmh" Jan 28 15:54:10 crc kubenswrapper[4871]: I0128 
15:54:10.350415 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lssmh" Jan 28 15:54:10 crc kubenswrapper[4871]: I0128 15:54:10.405826 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lssmh"] Jan 28 15:54:12 crc kubenswrapper[4871]: I0128 15:54:12.300337 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lssmh" podUID="af6de6b9-774a-4453-bc72-d19dc9cfe7d2" containerName="registry-server" containerID="cri-o://aac12414a4fb6e30dc6541ddcd926263d793adc5b199a86e0b4d90927f9e8492" gracePeriod=2 Jan 28 15:54:12 crc kubenswrapper[4871]: I0128 15:54:12.747184 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lssmh" Jan 28 15:54:12 crc kubenswrapper[4871]: I0128 15:54:12.858374 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6de6b9-774a-4453-bc72-d19dc9cfe7d2-utilities\") pod \"af6de6b9-774a-4453-bc72-d19dc9cfe7d2\" (UID: \"af6de6b9-774a-4453-bc72-d19dc9cfe7d2\") " Jan 28 15:54:12 crc kubenswrapper[4871]: I0128 15:54:12.858500 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn7mn\" (UniqueName: \"kubernetes.io/projected/af6de6b9-774a-4453-bc72-d19dc9cfe7d2-kube-api-access-fn7mn\") pod \"af6de6b9-774a-4453-bc72-d19dc9cfe7d2\" (UID: \"af6de6b9-774a-4453-bc72-d19dc9cfe7d2\") " Jan 28 15:54:12 crc kubenswrapper[4871]: I0128 15:54:12.858616 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6de6b9-774a-4453-bc72-d19dc9cfe7d2-catalog-content\") pod \"af6de6b9-774a-4453-bc72-d19dc9cfe7d2\" (UID: \"af6de6b9-774a-4453-bc72-d19dc9cfe7d2\") " Jan 28 15:54:12 crc kubenswrapper[4871]: 
I0128 15:54:12.860239 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af6de6b9-774a-4453-bc72-d19dc9cfe7d2-utilities" (OuterVolumeSpecName: "utilities") pod "af6de6b9-774a-4453-bc72-d19dc9cfe7d2" (UID: "af6de6b9-774a-4453-bc72-d19dc9cfe7d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:54:12 crc kubenswrapper[4871]: I0128 15:54:12.868692 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6de6b9-774a-4453-bc72-d19dc9cfe7d2-kube-api-access-fn7mn" (OuterVolumeSpecName: "kube-api-access-fn7mn") pod "af6de6b9-774a-4453-bc72-d19dc9cfe7d2" (UID: "af6de6b9-774a-4453-bc72-d19dc9cfe7d2"). InnerVolumeSpecName "kube-api-access-fn7mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:54:12 crc kubenswrapper[4871]: I0128 15:54:12.961546 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6de6b9-774a-4453-bc72-d19dc9cfe7d2-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:54:12 crc kubenswrapper[4871]: I0128 15:54:12.961618 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn7mn\" (UniqueName: \"kubernetes.io/projected/af6de6b9-774a-4453-bc72-d19dc9cfe7d2-kube-api-access-fn7mn\") on node \"crc\" DevicePath \"\"" Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.311507 4871 generic.go:334] "Generic (PLEG): container finished" podID="af6de6b9-774a-4453-bc72-d19dc9cfe7d2" containerID="aac12414a4fb6e30dc6541ddcd926263d793adc5b199a86e0b4d90927f9e8492" exitCode=0 Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.311568 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lssmh" event={"ID":"af6de6b9-774a-4453-bc72-d19dc9cfe7d2","Type":"ContainerDied","Data":"aac12414a4fb6e30dc6541ddcd926263d793adc5b199a86e0b4d90927f9e8492"} Jan 28 15:54:13 crc 
kubenswrapper[4871]: I0128 15:54:13.311645 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lssmh" event={"ID":"af6de6b9-774a-4453-bc72-d19dc9cfe7d2","Type":"ContainerDied","Data":"663096afce49fd3073c8ac7e851c2e0de9628e82bdf417e644de40cf5cbfebe5"} Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.311679 4871 scope.go:117] "RemoveContainer" containerID="aac12414a4fb6e30dc6541ddcd926263d793adc5b199a86e0b4d90927f9e8492" Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.312689 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lssmh" Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.335636 4871 scope.go:117] "RemoveContainer" containerID="c1f695bdd36bc024d60a3be1a8e5d016991273e873f9cf094e111f2fc3319bc4" Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.366273 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af6de6b9-774a-4453-bc72-d19dc9cfe7d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af6de6b9-774a-4453-bc72-d19dc9cfe7d2" (UID: "af6de6b9-774a-4453-bc72-d19dc9cfe7d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.368434 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6de6b9-774a-4453-bc72-d19dc9cfe7d2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.371346 4871 scope.go:117] "RemoveContainer" containerID="e16accc9a4827c98e6a0cffc89ebda381b5ab6e2a277982a9dd1470b52ffdd53" Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.405390 4871 scope.go:117] "RemoveContainer" containerID="aac12414a4fb6e30dc6541ddcd926263d793adc5b199a86e0b4d90927f9e8492" Jan 28 15:54:13 crc kubenswrapper[4871]: E0128 15:54:13.405905 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aac12414a4fb6e30dc6541ddcd926263d793adc5b199a86e0b4d90927f9e8492\": container with ID starting with aac12414a4fb6e30dc6541ddcd926263d793adc5b199a86e0b4d90927f9e8492 not found: ID does not exist" containerID="aac12414a4fb6e30dc6541ddcd926263d793adc5b199a86e0b4d90927f9e8492" Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.405946 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aac12414a4fb6e30dc6541ddcd926263d793adc5b199a86e0b4d90927f9e8492"} err="failed to get container status \"aac12414a4fb6e30dc6541ddcd926263d793adc5b199a86e0b4d90927f9e8492\": rpc error: code = NotFound desc = could not find container \"aac12414a4fb6e30dc6541ddcd926263d793adc5b199a86e0b4d90927f9e8492\": container with ID starting with aac12414a4fb6e30dc6541ddcd926263d793adc5b199a86e0b4d90927f9e8492 not found: ID does not exist" Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.405972 4871 scope.go:117] "RemoveContainer" containerID="c1f695bdd36bc024d60a3be1a8e5d016991273e873f9cf094e111f2fc3319bc4" Jan 28 15:54:13 crc kubenswrapper[4871]: E0128 15:54:13.406274 4871 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1f695bdd36bc024d60a3be1a8e5d016991273e873f9cf094e111f2fc3319bc4\": container with ID starting with c1f695bdd36bc024d60a3be1a8e5d016991273e873f9cf094e111f2fc3319bc4 not found: ID does not exist" containerID="c1f695bdd36bc024d60a3be1a8e5d016991273e873f9cf094e111f2fc3319bc4" Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.406303 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1f695bdd36bc024d60a3be1a8e5d016991273e873f9cf094e111f2fc3319bc4"} err="failed to get container status \"c1f695bdd36bc024d60a3be1a8e5d016991273e873f9cf094e111f2fc3319bc4\": rpc error: code = NotFound desc = could not find container \"c1f695bdd36bc024d60a3be1a8e5d016991273e873f9cf094e111f2fc3319bc4\": container with ID starting with c1f695bdd36bc024d60a3be1a8e5d016991273e873f9cf094e111f2fc3319bc4 not found: ID does not exist" Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.406320 4871 scope.go:117] "RemoveContainer" containerID="e16accc9a4827c98e6a0cffc89ebda381b5ab6e2a277982a9dd1470b52ffdd53" Jan 28 15:54:13 crc kubenswrapper[4871]: E0128 15:54:13.406581 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e16accc9a4827c98e6a0cffc89ebda381b5ab6e2a277982a9dd1470b52ffdd53\": container with ID starting with e16accc9a4827c98e6a0cffc89ebda381b5ab6e2a277982a9dd1470b52ffdd53 not found: ID does not exist" containerID="e16accc9a4827c98e6a0cffc89ebda381b5ab6e2a277982a9dd1470b52ffdd53" Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.406639 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e16accc9a4827c98e6a0cffc89ebda381b5ab6e2a277982a9dd1470b52ffdd53"} err="failed to get container status \"e16accc9a4827c98e6a0cffc89ebda381b5ab6e2a277982a9dd1470b52ffdd53\": rpc error: code = NotFound desc = could 
not find container \"e16accc9a4827c98e6a0cffc89ebda381b5ab6e2a277982a9dd1470b52ffdd53\": container with ID starting with e16accc9a4827c98e6a0cffc89ebda381b5ab6e2a277982a9dd1470b52ffdd53 not found: ID does not exist" Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.653634 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lssmh"] Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.666863 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lssmh"] Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.813103 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.813163 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.813204 4871 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 15:54:13.813683 4871 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"102f9d6a8c8eb16b03feb0138985e3dc7c9eddd1199f871e408408308b296fbd"} pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:54:13 crc kubenswrapper[4871]: I0128 
15:54:13.813738 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" containerID="cri-o://102f9d6a8c8eb16b03feb0138985e3dc7c9eddd1199f871e408408308b296fbd" gracePeriod=600 Jan 28 15:54:14 crc kubenswrapper[4871]: I0128 15:54:14.331304 4871 generic.go:334] "Generic (PLEG): container finished" podID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerID="102f9d6a8c8eb16b03feb0138985e3dc7c9eddd1199f871e408408308b296fbd" exitCode=0 Jan 28 15:54:14 crc kubenswrapper[4871]: I0128 15:54:14.331391 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerDied","Data":"102f9d6a8c8eb16b03feb0138985e3dc7c9eddd1199f871e408408308b296fbd"} Jan 28 15:54:14 crc kubenswrapper[4871]: I0128 15:54:14.331686 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerStarted","Data":"02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299"} Jan 28 15:54:14 crc kubenswrapper[4871]: I0128 15:54:14.331721 4871 scope.go:117] "RemoveContainer" containerID="7444f15d3daefde9cede2c5874664d07691116dba0499fc669145956d62558f6" Jan 28 15:54:14 crc kubenswrapper[4871]: I0128 15:54:14.914227 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af6de6b9-774a-4453-bc72-d19dc9cfe7d2" path="/var/lib/kubelet/pods/af6de6b9-774a-4453-bc72-d19dc9cfe7d2/volumes" Jan 28 15:56:43 crc kubenswrapper[4871]: I0128 15:56:43.813729 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:56:43 crc kubenswrapper[4871]: I0128 15:56:43.814340 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:57:13 crc kubenswrapper[4871]: I0128 15:57:13.813369 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:57:13 crc kubenswrapper[4871]: I0128 15:57:13.813842 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:57:43 crc kubenswrapper[4871]: I0128 15:57:43.814133 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:57:43 crc kubenswrapper[4871]: I0128 15:57:43.814737 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:57:43 crc kubenswrapper[4871]: I0128 
15:57:43.814793 4871 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 15:57:43 crc kubenswrapper[4871]: I0128 15:57:43.815572 4871 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299"} pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:57:43 crc kubenswrapper[4871]: I0128 15:57:43.815652 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" containerID="cri-o://02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" gracePeriod=600 Jan 28 15:57:43 crc kubenswrapper[4871]: E0128 15:57:43.982314 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:57:44 crc kubenswrapper[4871]: I0128 15:57:44.243749 4871 generic.go:334] "Generic (PLEG): container finished" podID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" exitCode=0 Jan 28 15:57:44 crc kubenswrapper[4871]: I0128 15:57:44.243787 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" 
event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerDied","Data":"02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299"} Jan 28 15:57:44 crc kubenswrapper[4871]: I0128 15:57:44.243817 4871 scope.go:117] "RemoveContainer" containerID="102f9d6a8c8eb16b03feb0138985e3dc7c9eddd1199f871e408408308b296fbd" Jan 28 15:57:44 crc kubenswrapper[4871]: I0128 15:57:44.244485 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 15:57:44 crc kubenswrapper[4871]: E0128 15:57:44.244903 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:57:56 crc kubenswrapper[4871]: I0128 15:57:56.904052 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 15:57:56 crc kubenswrapper[4871]: E0128 15:57:56.904954 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:58:09 crc kubenswrapper[4871]: I0128 15:58:09.904084 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 15:58:09 crc kubenswrapper[4871]: E0128 15:58:09.905619 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:58:21 crc kubenswrapper[4871]: I0128 15:58:21.905395 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 15:58:21 crc kubenswrapper[4871]: E0128 15:58:21.906798 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:58:33 crc kubenswrapper[4871]: I0128 15:58:33.904008 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 15:58:33 crc kubenswrapper[4871]: E0128 15:58:33.904994 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:58:45 crc kubenswrapper[4871]: I0128 15:58:45.903566 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 15:58:45 crc kubenswrapper[4871]: E0128 15:58:45.904426 4871 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:59:00 crc kubenswrapper[4871]: I0128 15:59:00.904814 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 15:59:00 crc kubenswrapper[4871]: E0128 15:59:00.905668 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:59:13 crc kubenswrapper[4871]: I0128 15:59:13.903531 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 15:59:13 crc kubenswrapper[4871]: E0128 15:59:13.904321 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:59:24 crc kubenswrapper[4871]: I0128 15:59:24.904369 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 15:59:24 crc kubenswrapper[4871]: E0128 15:59:24.906620 4871 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:59:37 crc kubenswrapper[4871]: I0128 15:59:37.904017 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 15:59:37 crc kubenswrapper[4871]: E0128 15:59:37.904789 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 15:59:51 crc kubenswrapper[4871]: I0128 15:59:51.904070 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 15:59:51 crc kubenswrapper[4871]: E0128 15:59:51.904981 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.144633 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493600-khsmg"] Jan 28 16:00:00 crc kubenswrapper[4871]: E0128 
16:00:00.145506 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af6de6b9-774a-4453-bc72-d19dc9cfe7d2" containerName="extract-content" Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.145523 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6de6b9-774a-4453-bc72-d19dc9cfe7d2" containerName="extract-content" Jan 28 16:00:00 crc kubenswrapper[4871]: E0128 16:00:00.145542 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af6de6b9-774a-4453-bc72-d19dc9cfe7d2" containerName="registry-server" Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.145549 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6de6b9-774a-4453-bc72-d19dc9cfe7d2" containerName="registry-server" Jan 28 16:00:00 crc kubenswrapper[4871]: E0128 16:00:00.145578 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af6de6b9-774a-4453-bc72-d19dc9cfe7d2" containerName="extract-utilities" Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.145607 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6de6b9-774a-4453-bc72-d19dc9cfe7d2" containerName="extract-utilities" Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.145761 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6de6b9-774a-4453-bc72-d19dc9cfe7d2" containerName="registry-server" Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.146275 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-khsmg" Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.147826 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.148926 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.159369 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493600-khsmg"] Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.174900 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9be911fb-0960-4c6d-8ca5-4e60291122e7-secret-volume\") pod \"collect-profiles-29493600-khsmg\" (UID: \"9be911fb-0960-4c6d-8ca5-4e60291122e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-khsmg" Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.175029 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9be911fb-0960-4c6d-8ca5-4e60291122e7-config-volume\") pod \"collect-profiles-29493600-khsmg\" (UID: \"9be911fb-0960-4c6d-8ca5-4e60291122e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-khsmg" Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.175156 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lg7b\" (UniqueName: \"kubernetes.io/projected/9be911fb-0960-4c6d-8ca5-4e60291122e7-kube-api-access-5lg7b\") pod \"collect-profiles-29493600-khsmg\" (UID: \"9be911fb-0960-4c6d-8ca5-4e60291122e7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-khsmg" Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.276671 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9be911fb-0960-4c6d-8ca5-4e60291122e7-config-volume\") pod \"collect-profiles-29493600-khsmg\" (UID: \"9be911fb-0960-4c6d-8ca5-4e60291122e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-khsmg" Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.276747 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lg7b\" (UniqueName: \"kubernetes.io/projected/9be911fb-0960-4c6d-8ca5-4e60291122e7-kube-api-access-5lg7b\") pod \"collect-profiles-29493600-khsmg\" (UID: \"9be911fb-0960-4c6d-8ca5-4e60291122e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-khsmg" Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.276793 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9be911fb-0960-4c6d-8ca5-4e60291122e7-secret-volume\") pod \"collect-profiles-29493600-khsmg\" (UID: \"9be911fb-0960-4c6d-8ca5-4e60291122e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-khsmg" Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.279021 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9be911fb-0960-4c6d-8ca5-4e60291122e7-config-volume\") pod \"collect-profiles-29493600-khsmg\" (UID: \"9be911fb-0960-4c6d-8ca5-4e60291122e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-khsmg" Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.282460 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9be911fb-0960-4c6d-8ca5-4e60291122e7-secret-volume\") pod \"collect-profiles-29493600-khsmg\" (UID: \"9be911fb-0960-4c6d-8ca5-4e60291122e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-khsmg" Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.292365 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lg7b\" (UniqueName: \"kubernetes.io/projected/9be911fb-0960-4c6d-8ca5-4e60291122e7-kube-api-access-5lg7b\") pod \"collect-profiles-29493600-khsmg\" (UID: \"9be911fb-0960-4c6d-8ca5-4e60291122e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-khsmg" Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.478621 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-khsmg" Jan 28 16:00:00 crc kubenswrapper[4871]: I0128 16:00:00.895690 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493600-khsmg"] Jan 28 16:00:01 crc kubenswrapper[4871]: I0128 16:00:01.331679 4871 generic.go:334] "Generic (PLEG): container finished" podID="9be911fb-0960-4c6d-8ca5-4e60291122e7" containerID="a7323f1075c35eb6e8d6feb02e2659f16af38d6d6befc213e52015938ed76a89" exitCode=0 Jan 28 16:00:01 crc kubenswrapper[4871]: I0128 16:00:01.331977 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-khsmg" event={"ID":"9be911fb-0960-4c6d-8ca5-4e60291122e7","Type":"ContainerDied","Data":"a7323f1075c35eb6e8d6feb02e2659f16af38d6d6befc213e52015938ed76a89"} Jan 28 16:00:01 crc kubenswrapper[4871]: I0128 16:00:01.332377 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-khsmg" 
event={"ID":"9be911fb-0960-4c6d-8ca5-4e60291122e7","Type":"ContainerStarted","Data":"d29a2a9555160ad835867c9ede53119c11116c55703b333eaf8485c412ab8290"} Jan 28 16:00:02 crc kubenswrapper[4871]: I0128 16:00:02.752012 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-khsmg" Jan 28 16:00:02 crc kubenswrapper[4871]: I0128 16:00:02.824269 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9be911fb-0960-4c6d-8ca5-4e60291122e7-config-volume\") pod \"9be911fb-0960-4c6d-8ca5-4e60291122e7\" (UID: \"9be911fb-0960-4c6d-8ca5-4e60291122e7\") " Jan 28 16:00:02 crc kubenswrapper[4871]: I0128 16:00:02.824539 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9be911fb-0960-4c6d-8ca5-4e60291122e7-secret-volume\") pod \"9be911fb-0960-4c6d-8ca5-4e60291122e7\" (UID: \"9be911fb-0960-4c6d-8ca5-4e60291122e7\") " Jan 28 16:00:02 crc kubenswrapper[4871]: I0128 16:00:02.824638 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lg7b\" (UniqueName: \"kubernetes.io/projected/9be911fb-0960-4c6d-8ca5-4e60291122e7-kube-api-access-5lg7b\") pod \"9be911fb-0960-4c6d-8ca5-4e60291122e7\" (UID: \"9be911fb-0960-4c6d-8ca5-4e60291122e7\") " Jan 28 16:00:02 crc kubenswrapper[4871]: I0128 16:00:02.825944 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9be911fb-0960-4c6d-8ca5-4e60291122e7-config-volume" (OuterVolumeSpecName: "config-volume") pod "9be911fb-0960-4c6d-8ca5-4e60291122e7" (UID: "9be911fb-0960-4c6d-8ca5-4e60291122e7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:00:02 crc kubenswrapper[4871]: I0128 16:00:02.831290 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9be911fb-0960-4c6d-8ca5-4e60291122e7-kube-api-access-5lg7b" (OuterVolumeSpecName: "kube-api-access-5lg7b") pod "9be911fb-0960-4c6d-8ca5-4e60291122e7" (UID: "9be911fb-0960-4c6d-8ca5-4e60291122e7"). InnerVolumeSpecName "kube-api-access-5lg7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:00:02 crc kubenswrapper[4871]: I0128 16:00:02.832669 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be911fb-0960-4c6d-8ca5-4e60291122e7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9be911fb-0960-4c6d-8ca5-4e60291122e7" (UID: "9be911fb-0960-4c6d-8ca5-4e60291122e7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:00:02 crc kubenswrapper[4871]: I0128 16:00:02.905113 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 16:00:02 crc kubenswrapper[4871]: E0128 16:00:02.905365 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:00:02 crc kubenswrapper[4871]: I0128 16:00:02.927544 4871 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9be911fb-0960-4c6d-8ca5-4e60291122e7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 16:00:02 crc kubenswrapper[4871]: I0128 16:00:02.927625 4871 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-5lg7b\" (UniqueName: \"kubernetes.io/projected/9be911fb-0960-4c6d-8ca5-4e60291122e7-kube-api-access-5lg7b\") on node \"crc\" DevicePath \"\"" Jan 28 16:00:02 crc kubenswrapper[4871]: I0128 16:00:02.927641 4871 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9be911fb-0960-4c6d-8ca5-4e60291122e7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 16:00:03 crc kubenswrapper[4871]: I0128 16:00:03.358919 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-khsmg" event={"ID":"9be911fb-0960-4c6d-8ca5-4e60291122e7","Type":"ContainerDied","Data":"d29a2a9555160ad835867c9ede53119c11116c55703b333eaf8485c412ab8290"} Jan 28 16:00:03 crc kubenswrapper[4871]: I0128 16:00:03.358962 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d29a2a9555160ad835867c9ede53119c11116c55703b333eaf8485c412ab8290" Jan 28 16:00:03 crc kubenswrapper[4871]: I0128 16:00:03.358968 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-khsmg" Jan 28 16:00:03 crc kubenswrapper[4871]: I0128 16:00:03.831103 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5"] Jan 28 16:00:03 crc kubenswrapper[4871]: I0128 16:00:03.837408 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-5cqh5"] Jan 28 16:00:04 crc kubenswrapper[4871]: I0128 16:00:04.920101 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad449099-f3be-4711-8c75-a8fab2eabda3" path="/var/lib/kubelet/pods/ad449099-f3be-4711-8c75-a8fab2eabda3/volumes" Jan 28 16:00:16 crc kubenswrapper[4871]: I0128 16:00:16.904485 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 16:00:16 crc kubenswrapper[4871]: E0128 16:00:16.905475 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:00:29 crc kubenswrapper[4871]: I0128 16:00:29.904353 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 16:00:29 crc kubenswrapper[4871]: E0128 16:00:29.905482 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:00:31 crc kubenswrapper[4871]: I0128 16:00:31.260026 4871 scope.go:117] "RemoveContainer" containerID="79cc46a1eca7a4e33f4dd296c5858b950b28cd2fd1673f2a40414cd660aed505" Jan 28 16:00:41 crc kubenswrapper[4871]: I0128 16:00:41.903447 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 16:00:41 crc kubenswrapper[4871]: E0128 16:00:41.904297 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:00:54 crc kubenswrapper[4871]: I0128 16:00:54.905490 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 16:00:54 crc kubenswrapper[4871]: E0128 16:00:54.906807 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:01:08 crc kubenswrapper[4871]: I0128 16:01:08.912517 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 16:01:08 crc kubenswrapper[4871]: E0128 16:01:08.913966 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:01:21 crc kubenswrapper[4871]: I0128 16:01:21.904876 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 16:01:21 crc kubenswrapper[4871]: E0128 16:01:21.907429 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:01:32 crc kubenswrapper[4871]: I0128 16:01:32.905241 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 16:01:32 crc kubenswrapper[4871]: E0128 16:01:32.906686 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:01:44 crc kubenswrapper[4871]: I0128 16:01:44.904673 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 16:01:44 crc kubenswrapper[4871]: E0128 16:01:44.905579 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:01:56 crc kubenswrapper[4871]: I0128 16:01:56.904634 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 16:01:56 crc kubenswrapper[4871]: E0128 16:01:56.905562 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:02:10 crc kubenswrapper[4871]: I0128 16:02:10.904538 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 16:02:10 crc kubenswrapper[4871]: E0128 16:02:10.905880 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:02:19 crc kubenswrapper[4871]: I0128 16:02:19.489813 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mxmsz"] Jan 28 16:02:19 crc kubenswrapper[4871]: E0128 16:02:19.490820 4871 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9be911fb-0960-4c6d-8ca5-4e60291122e7" containerName="collect-profiles" Jan 28 16:02:19 crc kubenswrapper[4871]: I0128 16:02:19.490841 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be911fb-0960-4c6d-8ca5-4e60291122e7" containerName="collect-profiles" Jan 28 16:02:19 crc kubenswrapper[4871]: I0128 16:02:19.491112 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be911fb-0960-4c6d-8ca5-4e60291122e7" containerName="collect-profiles" Jan 28 16:02:19 crc kubenswrapper[4871]: I0128 16:02:19.493352 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mxmsz" Jan 28 16:02:19 crc kubenswrapper[4871]: I0128 16:02:19.501177 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mxmsz"] Jan 28 16:02:19 crc kubenswrapper[4871]: I0128 16:02:19.532401 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd17859-a8cb-46cd-8e18-e7e8bb25f483-utilities\") pod \"redhat-operators-mxmsz\" (UID: \"2bd17859-a8cb-46cd-8e18-e7e8bb25f483\") " pod="openshift-marketplace/redhat-operators-mxmsz" Jan 28 16:02:19 crc kubenswrapper[4871]: I0128 16:02:19.532675 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd17859-a8cb-46cd-8e18-e7e8bb25f483-catalog-content\") pod \"redhat-operators-mxmsz\" (UID: \"2bd17859-a8cb-46cd-8e18-e7e8bb25f483\") " pod="openshift-marketplace/redhat-operators-mxmsz" Jan 28 16:02:19 crc kubenswrapper[4871]: I0128 16:02:19.532853 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gql5\" (UniqueName: \"kubernetes.io/projected/2bd17859-a8cb-46cd-8e18-e7e8bb25f483-kube-api-access-9gql5\") pod \"redhat-operators-mxmsz\" (UID: 
\"2bd17859-a8cb-46cd-8e18-e7e8bb25f483\") " pod="openshift-marketplace/redhat-operators-mxmsz" Jan 28 16:02:19 crc kubenswrapper[4871]: I0128 16:02:19.634240 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd17859-a8cb-46cd-8e18-e7e8bb25f483-catalog-content\") pod \"redhat-operators-mxmsz\" (UID: \"2bd17859-a8cb-46cd-8e18-e7e8bb25f483\") " pod="openshift-marketplace/redhat-operators-mxmsz" Jan 28 16:02:19 crc kubenswrapper[4871]: I0128 16:02:19.634341 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gql5\" (UniqueName: \"kubernetes.io/projected/2bd17859-a8cb-46cd-8e18-e7e8bb25f483-kube-api-access-9gql5\") pod \"redhat-operators-mxmsz\" (UID: \"2bd17859-a8cb-46cd-8e18-e7e8bb25f483\") " pod="openshift-marketplace/redhat-operators-mxmsz" Jan 28 16:02:19 crc kubenswrapper[4871]: I0128 16:02:19.634400 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd17859-a8cb-46cd-8e18-e7e8bb25f483-utilities\") pod \"redhat-operators-mxmsz\" (UID: \"2bd17859-a8cb-46cd-8e18-e7e8bb25f483\") " pod="openshift-marketplace/redhat-operators-mxmsz" Jan 28 16:02:19 crc kubenswrapper[4871]: I0128 16:02:19.634934 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd17859-a8cb-46cd-8e18-e7e8bb25f483-catalog-content\") pod \"redhat-operators-mxmsz\" (UID: \"2bd17859-a8cb-46cd-8e18-e7e8bb25f483\") " pod="openshift-marketplace/redhat-operators-mxmsz" Jan 28 16:02:19 crc kubenswrapper[4871]: I0128 16:02:19.634970 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd17859-a8cb-46cd-8e18-e7e8bb25f483-utilities\") pod \"redhat-operators-mxmsz\" (UID: \"2bd17859-a8cb-46cd-8e18-e7e8bb25f483\") " 
pod="openshift-marketplace/redhat-operators-mxmsz" Jan 28 16:02:19 crc kubenswrapper[4871]: I0128 16:02:19.654845 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gql5\" (UniqueName: \"kubernetes.io/projected/2bd17859-a8cb-46cd-8e18-e7e8bb25f483-kube-api-access-9gql5\") pod \"redhat-operators-mxmsz\" (UID: \"2bd17859-a8cb-46cd-8e18-e7e8bb25f483\") " pod="openshift-marketplace/redhat-operators-mxmsz" Jan 28 16:02:19 crc kubenswrapper[4871]: I0128 16:02:19.816482 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mxmsz" Jan 28 16:02:20 crc kubenswrapper[4871]: I0128 16:02:20.277710 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mxmsz"] Jan 28 16:02:21 crc kubenswrapper[4871]: I0128 16:02:21.030639 4871 generic.go:334] "Generic (PLEG): container finished" podID="2bd17859-a8cb-46cd-8e18-e7e8bb25f483" containerID="1f1d5f8712d9c8906c8186a5b75f92573336860512305cf962b17e46a7dfa512" exitCode=0 Jan 28 16:02:21 crc kubenswrapper[4871]: I0128 16:02:21.030676 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxmsz" event={"ID":"2bd17859-a8cb-46cd-8e18-e7e8bb25f483","Type":"ContainerDied","Data":"1f1d5f8712d9c8906c8186a5b75f92573336860512305cf962b17e46a7dfa512"} Jan 28 16:02:21 crc kubenswrapper[4871]: I0128 16:02:21.030702 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxmsz" event={"ID":"2bd17859-a8cb-46cd-8e18-e7e8bb25f483","Type":"ContainerStarted","Data":"ade40c148e8999611b04cbcc435dd323f0806808ca9fd6804680a9c496cd376e"} Jan 28 16:02:21 crc kubenswrapper[4871]: I0128 16:02:21.032658 4871 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 16:02:22 crc kubenswrapper[4871]: I0128 16:02:22.038657 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-mxmsz" event={"ID":"2bd17859-a8cb-46cd-8e18-e7e8bb25f483","Type":"ContainerStarted","Data":"5cac8aef34d780025f986097c389491d7c08b8019d9dee21a97b518e15716864"} Jan 28 16:02:23 crc kubenswrapper[4871]: I0128 16:02:23.048882 4871 generic.go:334] "Generic (PLEG): container finished" podID="2bd17859-a8cb-46cd-8e18-e7e8bb25f483" containerID="5cac8aef34d780025f986097c389491d7c08b8019d9dee21a97b518e15716864" exitCode=0 Jan 28 16:02:23 crc kubenswrapper[4871]: I0128 16:02:23.048925 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxmsz" event={"ID":"2bd17859-a8cb-46cd-8e18-e7e8bb25f483","Type":"ContainerDied","Data":"5cac8aef34d780025f986097c389491d7c08b8019d9dee21a97b518e15716864"} Jan 28 16:02:24 crc kubenswrapper[4871]: I0128 16:02:24.059106 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxmsz" event={"ID":"2bd17859-a8cb-46cd-8e18-e7e8bb25f483","Type":"ContainerStarted","Data":"7f9d252bafd4fef386862dc3b0fce3b45b9802c2a9c46e359a9aa73562aa1af6"} Jan 28 16:02:24 crc kubenswrapper[4871]: I0128 16:02:24.079532 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mxmsz" podStartSLOduration=2.271806659 podStartE2EDuration="5.079514426s" podCreationTimestamp="2026-01-28 16:02:19 +0000 UTC" firstStartedPulling="2026-01-28 16:02:21.032316901 +0000 UTC m=+2692.928155233" lastFinishedPulling="2026-01-28 16:02:23.840024678 +0000 UTC m=+2695.735863000" observedRunningTime="2026-01-28 16:02:24.07871486 +0000 UTC m=+2695.974553202" watchObservedRunningTime="2026-01-28 16:02:24.079514426 +0000 UTC m=+2695.975352748" Jan 28 16:02:24 crc kubenswrapper[4871]: I0128 16:02:24.906887 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 16:02:24 crc kubenswrapper[4871]: E0128 16:02:24.907496 4871 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:02:29 crc kubenswrapper[4871]: I0128 16:02:29.817290 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mxmsz" Jan 28 16:02:29 crc kubenswrapper[4871]: I0128 16:02:29.817958 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mxmsz" Jan 28 16:02:29 crc kubenswrapper[4871]: I0128 16:02:29.867461 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mxmsz" Jan 28 16:02:30 crc kubenswrapper[4871]: I0128 16:02:30.154133 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mxmsz" Jan 28 16:02:30 crc kubenswrapper[4871]: I0128 16:02:30.203486 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mxmsz"] Jan 28 16:02:32 crc kubenswrapper[4871]: I0128 16:02:32.127300 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mxmsz" podUID="2bd17859-a8cb-46cd-8e18-e7e8bb25f483" containerName="registry-server" containerID="cri-o://7f9d252bafd4fef386862dc3b0fce3b45b9802c2a9c46e359a9aa73562aa1af6" gracePeriod=2 Jan 28 16:02:32 crc kubenswrapper[4871]: I0128 16:02:32.518643 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mxmsz" Jan 28 16:02:32 crc kubenswrapper[4871]: I0128 16:02:32.644694 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd17859-a8cb-46cd-8e18-e7e8bb25f483-utilities\") pod \"2bd17859-a8cb-46cd-8e18-e7e8bb25f483\" (UID: \"2bd17859-a8cb-46cd-8e18-e7e8bb25f483\") " Jan 28 16:02:32 crc kubenswrapper[4871]: I0128 16:02:32.644778 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd17859-a8cb-46cd-8e18-e7e8bb25f483-catalog-content\") pod \"2bd17859-a8cb-46cd-8e18-e7e8bb25f483\" (UID: \"2bd17859-a8cb-46cd-8e18-e7e8bb25f483\") " Jan 28 16:02:32 crc kubenswrapper[4871]: I0128 16:02:32.645020 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gql5\" (UniqueName: \"kubernetes.io/projected/2bd17859-a8cb-46cd-8e18-e7e8bb25f483-kube-api-access-9gql5\") pod \"2bd17859-a8cb-46cd-8e18-e7e8bb25f483\" (UID: \"2bd17859-a8cb-46cd-8e18-e7e8bb25f483\") " Jan 28 16:02:32 crc kubenswrapper[4871]: I0128 16:02:32.645553 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bd17859-a8cb-46cd-8e18-e7e8bb25f483-utilities" (OuterVolumeSpecName: "utilities") pod "2bd17859-a8cb-46cd-8e18-e7e8bb25f483" (UID: "2bd17859-a8cb-46cd-8e18-e7e8bb25f483"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:02:32 crc kubenswrapper[4871]: I0128 16:02:32.654893 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd17859-a8cb-46cd-8e18-e7e8bb25f483-kube-api-access-9gql5" (OuterVolumeSpecName: "kube-api-access-9gql5") pod "2bd17859-a8cb-46cd-8e18-e7e8bb25f483" (UID: "2bd17859-a8cb-46cd-8e18-e7e8bb25f483"). InnerVolumeSpecName "kube-api-access-9gql5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:02:32 crc kubenswrapper[4871]: I0128 16:02:32.746751 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gql5\" (UniqueName: \"kubernetes.io/projected/2bd17859-a8cb-46cd-8e18-e7e8bb25f483-kube-api-access-9gql5\") on node \"crc\" DevicePath \"\"" Jan 28 16:02:32 crc kubenswrapper[4871]: I0128 16:02:32.746791 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd17859-a8cb-46cd-8e18-e7e8bb25f483-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:02:32 crc kubenswrapper[4871]: I0128 16:02:32.800681 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bd17859-a8cb-46cd-8e18-e7e8bb25f483-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2bd17859-a8cb-46cd-8e18-e7e8bb25f483" (UID: "2bd17859-a8cb-46cd-8e18-e7e8bb25f483"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:02:32 crc kubenswrapper[4871]: I0128 16:02:32.847928 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd17859-a8cb-46cd-8e18-e7e8bb25f483-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:02:33 crc kubenswrapper[4871]: I0128 16:02:33.138242 4871 generic.go:334] "Generic (PLEG): container finished" podID="2bd17859-a8cb-46cd-8e18-e7e8bb25f483" containerID="7f9d252bafd4fef386862dc3b0fce3b45b9802c2a9c46e359a9aa73562aa1af6" exitCode=0 Jan 28 16:02:33 crc kubenswrapper[4871]: I0128 16:02:33.138295 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxmsz" event={"ID":"2bd17859-a8cb-46cd-8e18-e7e8bb25f483","Type":"ContainerDied","Data":"7f9d252bafd4fef386862dc3b0fce3b45b9802c2a9c46e359a9aa73562aa1af6"} Jan 28 16:02:33 crc kubenswrapper[4871]: I0128 16:02:33.138328 4871 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-mxmsz" event={"ID":"2bd17859-a8cb-46cd-8e18-e7e8bb25f483","Type":"ContainerDied","Data":"ade40c148e8999611b04cbcc435dd323f0806808ca9fd6804680a9c496cd376e"} Jan 28 16:02:33 crc kubenswrapper[4871]: I0128 16:02:33.138350 4871 scope.go:117] "RemoveContainer" containerID="7f9d252bafd4fef386862dc3b0fce3b45b9802c2a9c46e359a9aa73562aa1af6" Jan 28 16:02:33 crc kubenswrapper[4871]: I0128 16:02:33.138500 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mxmsz" Jan 28 16:02:33 crc kubenswrapper[4871]: I0128 16:02:33.171370 4871 scope.go:117] "RemoveContainer" containerID="5cac8aef34d780025f986097c389491d7c08b8019d9dee21a97b518e15716864" Jan 28 16:02:33 crc kubenswrapper[4871]: I0128 16:02:33.178654 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mxmsz"] Jan 28 16:02:33 crc kubenswrapper[4871]: I0128 16:02:33.188428 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mxmsz"] Jan 28 16:02:33 crc kubenswrapper[4871]: I0128 16:02:33.202927 4871 scope.go:117] "RemoveContainer" containerID="1f1d5f8712d9c8906c8186a5b75f92573336860512305cf962b17e46a7dfa512" Jan 28 16:02:33 crc kubenswrapper[4871]: I0128 16:02:33.234464 4871 scope.go:117] "RemoveContainer" containerID="7f9d252bafd4fef386862dc3b0fce3b45b9802c2a9c46e359a9aa73562aa1af6" Jan 28 16:02:33 crc kubenswrapper[4871]: E0128 16:02:33.235198 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f9d252bafd4fef386862dc3b0fce3b45b9802c2a9c46e359a9aa73562aa1af6\": container with ID starting with 7f9d252bafd4fef386862dc3b0fce3b45b9802c2a9c46e359a9aa73562aa1af6 not found: ID does not exist" containerID="7f9d252bafd4fef386862dc3b0fce3b45b9802c2a9c46e359a9aa73562aa1af6" Jan 28 16:02:33 crc kubenswrapper[4871]: I0128 16:02:33.235263 4871 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f9d252bafd4fef386862dc3b0fce3b45b9802c2a9c46e359a9aa73562aa1af6"} err="failed to get container status \"7f9d252bafd4fef386862dc3b0fce3b45b9802c2a9c46e359a9aa73562aa1af6\": rpc error: code = NotFound desc = could not find container \"7f9d252bafd4fef386862dc3b0fce3b45b9802c2a9c46e359a9aa73562aa1af6\": container with ID starting with 7f9d252bafd4fef386862dc3b0fce3b45b9802c2a9c46e359a9aa73562aa1af6 not found: ID does not exist" Jan 28 16:02:33 crc kubenswrapper[4871]: I0128 16:02:33.235293 4871 scope.go:117] "RemoveContainer" containerID="5cac8aef34d780025f986097c389491d7c08b8019d9dee21a97b518e15716864" Jan 28 16:02:33 crc kubenswrapper[4871]: E0128 16:02:33.236180 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cac8aef34d780025f986097c389491d7c08b8019d9dee21a97b518e15716864\": container with ID starting with 5cac8aef34d780025f986097c389491d7c08b8019d9dee21a97b518e15716864 not found: ID does not exist" containerID="5cac8aef34d780025f986097c389491d7c08b8019d9dee21a97b518e15716864" Jan 28 16:02:33 crc kubenswrapper[4871]: I0128 16:02:33.236235 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cac8aef34d780025f986097c389491d7c08b8019d9dee21a97b518e15716864"} err="failed to get container status \"5cac8aef34d780025f986097c389491d7c08b8019d9dee21a97b518e15716864\": rpc error: code = NotFound desc = could not find container \"5cac8aef34d780025f986097c389491d7c08b8019d9dee21a97b518e15716864\": container with ID starting with 5cac8aef34d780025f986097c389491d7c08b8019d9dee21a97b518e15716864 not found: ID does not exist" Jan 28 16:02:33 crc kubenswrapper[4871]: I0128 16:02:33.236276 4871 scope.go:117] "RemoveContainer" containerID="1f1d5f8712d9c8906c8186a5b75f92573336860512305cf962b17e46a7dfa512" Jan 28 16:02:33 crc kubenswrapper[4871]: E0128 
16:02:33.236655 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f1d5f8712d9c8906c8186a5b75f92573336860512305cf962b17e46a7dfa512\": container with ID starting with 1f1d5f8712d9c8906c8186a5b75f92573336860512305cf962b17e46a7dfa512 not found: ID does not exist" containerID="1f1d5f8712d9c8906c8186a5b75f92573336860512305cf962b17e46a7dfa512" Jan 28 16:02:33 crc kubenswrapper[4871]: I0128 16:02:33.236710 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f1d5f8712d9c8906c8186a5b75f92573336860512305cf962b17e46a7dfa512"} err="failed to get container status \"1f1d5f8712d9c8906c8186a5b75f92573336860512305cf962b17e46a7dfa512\": rpc error: code = NotFound desc = could not find container \"1f1d5f8712d9c8906c8186a5b75f92573336860512305cf962b17e46a7dfa512\": container with ID starting with 1f1d5f8712d9c8906c8186a5b75f92573336860512305cf962b17e46a7dfa512 not found: ID does not exist" Jan 28 16:02:34 crc kubenswrapper[4871]: I0128 16:02:34.913261 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bd17859-a8cb-46cd-8e18-e7e8bb25f483" path="/var/lib/kubelet/pods/2bd17859-a8cb-46cd-8e18-e7e8bb25f483/volumes" Jan 28 16:02:36 crc kubenswrapper[4871]: I0128 16:02:36.904291 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 16:02:36 crc kubenswrapper[4871]: E0128 16:02:36.904894 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:02:47 crc kubenswrapper[4871]: I0128 16:02:47.904090 
4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 16:02:48 crc kubenswrapper[4871]: I0128 16:02:48.258287 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerStarted","Data":"43cb38f5cda5c90164b6b38b56d82a43200106b8ddf13410e4e80153656563fe"} Jan 28 16:02:56 crc kubenswrapper[4871]: I0128 16:02:56.672797 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5rsk8"] Jan 28 16:02:56 crc kubenswrapper[4871]: E0128 16:02:56.674218 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd17859-a8cb-46cd-8e18-e7e8bb25f483" containerName="extract-utilities" Jan 28 16:02:56 crc kubenswrapper[4871]: I0128 16:02:56.674241 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd17859-a8cb-46cd-8e18-e7e8bb25f483" containerName="extract-utilities" Jan 28 16:02:56 crc kubenswrapper[4871]: E0128 16:02:56.674282 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd17859-a8cb-46cd-8e18-e7e8bb25f483" containerName="registry-server" Jan 28 16:02:56 crc kubenswrapper[4871]: I0128 16:02:56.674291 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd17859-a8cb-46cd-8e18-e7e8bb25f483" containerName="registry-server" Jan 28 16:02:56 crc kubenswrapper[4871]: E0128 16:02:56.674321 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd17859-a8cb-46cd-8e18-e7e8bb25f483" containerName="extract-content" Jan 28 16:02:56 crc kubenswrapper[4871]: I0128 16:02:56.674330 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd17859-a8cb-46cd-8e18-e7e8bb25f483" containerName="extract-content" Jan 28 16:02:56 crc kubenswrapper[4871]: I0128 16:02:56.674932 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd17859-a8cb-46cd-8e18-e7e8bb25f483" 
containerName="registry-server" Jan 28 16:02:56 crc kubenswrapper[4871]: I0128 16:02:56.677316 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rsk8" Jan 28 16:02:56 crc kubenswrapper[4871]: I0128 16:02:56.688960 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rsk8"] Jan 28 16:02:56 crc kubenswrapper[4871]: I0128 16:02:56.777301 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr8c8\" (UniqueName: \"kubernetes.io/projected/4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8-kube-api-access-jr8c8\") pod \"redhat-marketplace-5rsk8\" (UID: \"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8\") " pod="openshift-marketplace/redhat-marketplace-5rsk8" Jan 28 16:02:56 crc kubenswrapper[4871]: I0128 16:02:56.777377 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8-catalog-content\") pod \"redhat-marketplace-5rsk8\" (UID: \"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8\") " pod="openshift-marketplace/redhat-marketplace-5rsk8" Jan 28 16:02:56 crc kubenswrapper[4871]: I0128 16:02:56.777521 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8-utilities\") pod \"redhat-marketplace-5rsk8\" (UID: \"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8\") " pod="openshift-marketplace/redhat-marketplace-5rsk8" Jan 28 16:02:56 crc kubenswrapper[4871]: I0128 16:02:56.878831 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr8c8\" (UniqueName: \"kubernetes.io/projected/4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8-kube-api-access-jr8c8\") pod \"redhat-marketplace-5rsk8\" (UID: \"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8\") " 
pod="openshift-marketplace/redhat-marketplace-5rsk8" Jan 28 16:02:56 crc kubenswrapper[4871]: I0128 16:02:56.879220 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8-catalog-content\") pod \"redhat-marketplace-5rsk8\" (UID: \"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8\") " pod="openshift-marketplace/redhat-marketplace-5rsk8" Jan 28 16:02:56 crc kubenswrapper[4871]: I0128 16:02:56.879262 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8-utilities\") pod \"redhat-marketplace-5rsk8\" (UID: \"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8\") " pod="openshift-marketplace/redhat-marketplace-5rsk8" Jan 28 16:02:56 crc kubenswrapper[4871]: I0128 16:02:56.879704 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8-catalog-content\") pod \"redhat-marketplace-5rsk8\" (UID: \"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8\") " pod="openshift-marketplace/redhat-marketplace-5rsk8" Jan 28 16:02:56 crc kubenswrapper[4871]: I0128 16:02:56.879740 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8-utilities\") pod \"redhat-marketplace-5rsk8\" (UID: \"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8\") " pod="openshift-marketplace/redhat-marketplace-5rsk8" Jan 28 16:02:56 crc kubenswrapper[4871]: I0128 16:02:56.910341 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr8c8\" (UniqueName: \"kubernetes.io/projected/4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8-kube-api-access-jr8c8\") pod \"redhat-marketplace-5rsk8\" (UID: \"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8\") " pod="openshift-marketplace/redhat-marketplace-5rsk8" Jan 28 
16:02:57 crc kubenswrapper[4871]: I0128 16:02:57.001635 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rsk8" Jan 28 16:02:57 crc kubenswrapper[4871]: I0128 16:02:57.483740 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rsk8"] Jan 28 16:02:58 crc kubenswrapper[4871]: I0128 16:02:58.349466 4871 generic.go:334] "Generic (PLEG): container finished" podID="4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8" containerID="b8a93d8dd8b4032f9d90d0a1ffa027490ee83a97e3589592f106cf4e5ae5324a" exitCode=0 Jan 28 16:02:58 crc kubenswrapper[4871]: I0128 16:02:58.349804 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rsk8" event={"ID":"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8","Type":"ContainerDied","Data":"b8a93d8dd8b4032f9d90d0a1ffa027490ee83a97e3589592f106cf4e5ae5324a"} Jan 28 16:02:58 crc kubenswrapper[4871]: I0128 16:02:58.349833 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rsk8" event={"ID":"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8","Type":"ContainerStarted","Data":"e2b66cc6a319e1f6f66812888e29babd9c0576c735e4e6cd2c457b70866fd8db"} Jan 28 16:02:59 crc kubenswrapper[4871]: I0128 16:02:59.369910 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rsk8" event={"ID":"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8","Type":"ContainerStarted","Data":"cf100dd042bf9e94054549bafffd4f4adeed192127752e04f64a71ee7c0461e2"} Jan 28 16:03:00 crc kubenswrapper[4871]: I0128 16:03:00.381484 4871 generic.go:334] "Generic (PLEG): container finished" podID="4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8" containerID="cf100dd042bf9e94054549bafffd4f4adeed192127752e04f64a71ee7c0461e2" exitCode=0 Jan 28 16:03:00 crc kubenswrapper[4871]: I0128 16:03:00.381573 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-5rsk8" event={"ID":"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8","Type":"ContainerDied","Data":"cf100dd042bf9e94054549bafffd4f4adeed192127752e04f64a71ee7c0461e2"} Jan 28 16:03:01 crc kubenswrapper[4871]: I0128 16:03:01.391860 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rsk8" event={"ID":"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8","Type":"ContainerStarted","Data":"da1f184b400c0c1b6c1547e940dc48bba21e0c4081fff7ca6f3bfef9009f82c8"} Jan 28 16:03:07 crc kubenswrapper[4871]: I0128 16:03:07.002763 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5rsk8" Jan 28 16:03:07 crc kubenswrapper[4871]: I0128 16:03:07.003677 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5rsk8" Jan 28 16:03:07 crc kubenswrapper[4871]: I0128 16:03:07.070087 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5rsk8" Jan 28 16:03:07 crc kubenswrapper[4871]: I0128 16:03:07.097968 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5rsk8" podStartSLOduration=8.297490991 podStartE2EDuration="11.097949401s" podCreationTimestamp="2026-01-28 16:02:56 +0000 UTC" firstStartedPulling="2026-01-28 16:02:58.351171553 +0000 UTC m=+2730.247009875" lastFinishedPulling="2026-01-28 16:03:01.151629963 +0000 UTC m=+2733.047468285" observedRunningTime="2026-01-28 16:03:01.416744783 +0000 UTC m=+2733.312583135" watchObservedRunningTime="2026-01-28 16:03:07.097949401 +0000 UTC m=+2738.993787723" Jan 28 16:03:07 crc kubenswrapper[4871]: I0128 16:03:07.483734 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5rsk8" Jan 28 16:03:07 crc kubenswrapper[4871]: I0128 16:03:07.525760 4871 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rsk8"] Jan 28 16:03:09 crc kubenswrapper[4871]: I0128 16:03:09.452430 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5rsk8" podUID="4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8" containerName="registry-server" containerID="cri-o://da1f184b400c0c1b6c1547e940dc48bba21e0c4081fff7ca6f3bfef9009f82c8" gracePeriod=2 Jan 28 16:03:09 crc kubenswrapper[4871]: I0128 16:03:09.887534 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rsk8" Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.007546 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8-utilities\") pod \"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8\" (UID: \"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8\") " Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.007666 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr8c8\" (UniqueName: \"kubernetes.io/projected/4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8-kube-api-access-jr8c8\") pod \"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8\" (UID: \"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8\") " Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.007761 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8-catalog-content\") pod \"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8\" (UID: \"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8\") " Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.010713 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8-utilities" (OuterVolumeSpecName: "utilities") pod 
"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8" (UID: "4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.015152 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8-kube-api-access-jr8c8" (OuterVolumeSpecName: "kube-api-access-jr8c8") pod "4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8" (UID: "4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8"). InnerVolumeSpecName "kube-api-access-jr8c8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.052034 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8" (UID: "4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.110284 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.110327 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr8c8\" (UniqueName: \"kubernetes.io/projected/4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8-kube-api-access-jr8c8\") on node \"crc\" DevicePath \"\"" Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.110340 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.461919 4871 generic.go:334] "Generic (PLEG): container finished" podID="4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8" containerID="da1f184b400c0c1b6c1547e940dc48bba21e0c4081fff7ca6f3bfef9009f82c8" exitCode=0 Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.461963 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rsk8" event={"ID":"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8","Type":"ContainerDied","Data":"da1f184b400c0c1b6c1547e940dc48bba21e0c4081fff7ca6f3bfef9009f82c8"} Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.461970 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rsk8" Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.461998 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rsk8" event={"ID":"4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8","Type":"ContainerDied","Data":"e2b66cc6a319e1f6f66812888e29babd9c0576c735e4e6cd2c457b70866fd8db"} Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.462017 4871 scope.go:117] "RemoveContainer" containerID="da1f184b400c0c1b6c1547e940dc48bba21e0c4081fff7ca6f3bfef9009f82c8" Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.483961 4871 scope.go:117] "RemoveContainer" containerID="cf100dd042bf9e94054549bafffd4f4adeed192127752e04f64a71ee7c0461e2" Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.495429 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rsk8"] Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.501386 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rsk8"] Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.525542 4871 scope.go:117] "RemoveContainer" containerID="b8a93d8dd8b4032f9d90d0a1ffa027490ee83a97e3589592f106cf4e5ae5324a" Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.540842 4871 scope.go:117] "RemoveContainer" containerID="da1f184b400c0c1b6c1547e940dc48bba21e0c4081fff7ca6f3bfef9009f82c8" Jan 28 16:03:10 crc kubenswrapper[4871]: E0128 16:03:10.541224 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da1f184b400c0c1b6c1547e940dc48bba21e0c4081fff7ca6f3bfef9009f82c8\": container with ID starting with da1f184b400c0c1b6c1547e940dc48bba21e0c4081fff7ca6f3bfef9009f82c8 not found: ID does not exist" containerID="da1f184b400c0c1b6c1547e940dc48bba21e0c4081fff7ca6f3bfef9009f82c8" Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.541263 4871 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da1f184b400c0c1b6c1547e940dc48bba21e0c4081fff7ca6f3bfef9009f82c8"} err="failed to get container status \"da1f184b400c0c1b6c1547e940dc48bba21e0c4081fff7ca6f3bfef9009f82c8\": rpc error: code = NotFound desc = could not find container \"da1f184b400c0c1b6c1547e940dc48bba21e0c4081fff7ca6f3bfef9009f82c8\": container with ID starting with da1f184b400c0c1b6c1547e940dc48bba21e0c4081fff7ca6f3bfef9009f82c8 not found: ID does not exist" Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.541300 4871 scope.go:117] "RemoveContainer" containerID="cf100dd042bf9e94054549bafffd4f4adeed192127752e04f64a71ee7c0461e2" Jan 28 16:03:10 crc kubenswrapper[4871]: E0128 16:03:10.541833 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf100dd042bf9e94054549bafffd4f4adeed192127752e04f64a71ee7c0461e2\": container with ID starting with cf100dd042bf9e94054549bafffd4f4adeed192127752e04f64a71ee7c0461e2 not found: ID does not exist" containerID="cf100dd042bf9e94054549bafffd4f4adeed192127752e04f64a71ee7c0461e2" Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.541888 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf100dd042bf9e94054549bafffd4f4adeed192127752e04f64a71ee7c0461e2"} err="failed to get container status \"cf100dd042bf9e94054549bafffd4f4adeed192127752e04f64a71ee7c0461e2\": rpc error: code = NotFound desc = could not find container \"cf100dd042bf9e94054549bafffd4f4adeed192127752e04f64a71ee7c0461e2\": container with ID starting with cf100dd042bf9e94054549bafffd4f4adeed192127752e04f64a71ee7c0461e2 not found: ID does not exist" Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.541928 4871 scope.go:117] "RemoveContainer" containerID="b8a93d8dd8b4032f9d90d0a1ffa027490ee83a97e3589592f106cf4e5ae5324a" Jan 28 16:03:10 crc kubenswrapper[4871]: E0128 
16:03:10.542342 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8a93d8dd8b4032f9d90d0a1ffa027490ee83a97e3589592f106cf4e5ae5324a\": container with ID starting with b8a93d8dd8b4032f9d90d0a1ffa027490ee83a97e3589592f106cf4e5ae5324a not found: ID does not exist" containerID="b8a93d8dd8b4032f9d90d0a1ffa027490ee83a97e3589592f106cf4e5ae5324a" Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.542375 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8a93d8dd8b4032f9d90d0a1ffa027490ee83a97e3589592f106cf4e5ae5324a"} err="failed to get container status \"b8a93d8dd8b4032f9d90d0a1ffa027490ee83a97e3589592f106cf4e5ae5324a\": rpc error: code = NotFound desc = could not find container \"b8a93d8dd8b4032f9d90d0a1ffa027490ee83a97e3589592f106cf4e5ae5324a\": container with ID starting with b8a93d8dd8b4032f9d90d0a1ffa027490ee83a97e3589592f106cf4e5ae5324a not found: ID does not exist" Jan 28 16:03:10 crc kubenswrapper[4871]: I0128 16:03:10.921617 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8" path="/var/lib/kubelet/pods/4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8/volumes" Jan 28 16:03:48 crc kubenswrapper[4871]: I0128 16:03:48.009649 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b7fcf"] Jan 28 16:03:48 crc kubenswrapper[4871]: E0128 16:03:48.010735 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8" containerName="extract-content" Jan 28 16:03:48 crc kubenswrapper[4871]: I0128 16:03:48.010749 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8" containerName="extract-content" Jan 28 16:03:48 crc kubenswrapper[4871]: E0128 16:03:48.010794 4871 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8" containerName="registry-server" Jan 28 16:03:48 crc kubenswrapper[4871]: I0128 16:03:48.010801 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8" containerName="registry-server" Jan 28 16:03:48 crc kubenswrapper[4871]: E0128 16:03:48.010814 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8" containerName="extract-utilities" Jan 28 16:03:48 crc kubenswrapper[4871]: I0128 16:03:48.010840 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8" containerName="extract-utilities" Jan 28 16:03:48 crc kubenswrapper[4871]: I0128 16:03:48.011054 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="4159a1a3-4fb0-4a27-8e9b-31ad4e2daef8" containerName="registry-server" Jan 28 16:03:48 crc kubenswrapper[4871]: I0128 16:03:48.012470 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b7fcf" Jan 28 16:03:48 crc kubenswrapper[4871]: I0128 16:03:48.027802 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b7fcf"] Jan 28 16:03:48 crc kubenswrapper[4871]: I0128 16:03:48.155855 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h696\" (UniqueName: \"kubernetes.io/projected/ff85f7c6-824e-410e-a946-fa96b12ffb33-kube-api-access-5h696\") pod \"community-operators-b7fcf\" (UID: \"ff85f7c6-824e-410e-a946-fa96b12ffb33\") " pod="openshift-marketplace/community-operators-b7fcf" Jan 28 16:03:48 crc kubenswrapper[4871]: I0128 16:03:48.156148 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff85f7c6-824e-410e-a946-fa96b12ffb33-catalog-content\") pod \"community-operators-b7fcf\" (UID: 
\"ff85f7c6-824e-410e-a946-fa96b12ffb33\") " pod="openshift-marketplace/community-operators-b7fcf" Jan 28 16:03:48 crc kubenswrapper[4871]: I0128 16:03:48.156370 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff85f7c6-824e-410e-a946-fa96b12ffb33-utilities\") pod \"community-operators-b7fcf\" (UID: \"ff85f7c6-824e-410e-a946-fa96b12ffb33\") " pod="openshift-marketplace/community-operators-b7fcf" Jan 28 16:03:48 crc kubenswrapper[4871]: I0128 16:03:48.257824 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff85f7c6-824e-410e-a946-fa96b12ffb33-catalog-content\") pod \"community-operators-b7fcf\" (UID: \"ff85f7c6-824e-410e-a946-fa96b12ffb33\") " pod="openshift-marketplace/community-operators-b7fcf" Jan 28 16:03:48 crc kubenswrapper[4871]: I0128 16:03:48.257942 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff85f7c6-824e-410e-a946-fa96b12ffb33-utilities\") pod \"community-operators-b7fcf\" (UID: \"ff85f7c6-824e-410e-a946-fa96b12ffb33\") " pod="openshift-marketplace/community-operators-b7fcf" Jan 28 16:03:48 crc kubenswrapper[4871]: I0128 16:03:48.258014 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h696\" (UniqueName: \"kubernetes.io/projected/ff85f7c6-824e-410e-a946-fa96b12ffb33-kube-api-access-5h696\") pod \"community-operators-b7fcf\" (UID: \"ff85f7c6-824e-410e-a946-fa96b12ffb33\") " pod="openshift-marketplace/community-operators-b7fcf" Jan 28 16:03:48 crc kubenswrapper[4871]: I0128 16:03:48.258731 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff85f7c6-824e-410e-a946-fa96b12ffb33-catalog-content\") pod \"community-operators-b7fcf\" (UID: 
\"ff85f7c6-824e-410e-a946-fa96b12ffb33\") " pod="openshift-marketplace/community-operators-b7fcf" Jan 28 16:03:48 crc kubenswrapper[4871]: I0128 16:03:48.258814 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff85f7c6-824e-410e-a946-fa96b12ffb33-utilities\") pod \"community-operators-b7fcf\" (UID: \"ff85f7c6-824e-410e-a946-fa96b12ffb33\") " pod="openshift-marketplace/community-operators-b7fcf" Jan 28 16:03:48 crc kubenswrapper[4871]: I0128 16:03:48.282297 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h696\" (UniqueName: \"kubernetes.io/projected/ff85f7c6-824e-410e-a946-fa96b12ffb33-kube-api-access-5h696\") pod \"community-operators-b7fcf\" (UID: \"ff85f7c6-824e-410e-a946-fa96b12ffb33\") " pod="openshift-marketplace/community-operators-b7fcf" Jan 28 16:03:48 crc kubenswrapper[4871]: I0128 16:03:48.333123 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b7fcf" Jan 28 16:03:48 crc kubenswrapper[4871]: I0128 16:03:48.987039 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b7fcf"] Jan 28 16:03:49 crc kubenswrapper[4871]: I0128 16:03:49.794878 4871 generic.go:334] "Generic (PLEG): container finished" podID="ff85f7c6-824e-410e-a946-fa96b12ffb33" containerID="39a1cba9a04588d3da9ed7b145b0afdbffccd1a41884ab16aa7a67615f035f5c" exitCode=0 Jan 28 16:03:49 crc kubenswrapper[4871]: I0128 16:03:49.795108 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7fcf" event={"ID":"ff85f7c6-824e-410e-a946-fa96b12ffb33","Type":"ContainerDied","Data":"39a1cba9a04588d3da9ed7b145b0afdbffccd1a41884ab16aa7a67615f035f5c"} Jan 28 16:03:49 crc kubenswrapper[4871]: I0128 16:03:49.795183 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7fcf" 
event={"ID":"ff85f7c6-824e-410e-a946-fa96b12ffb33","Type":"ContainerStarted","Data":"cce28cd13558f5876a522809b66fa8d5c00ffe1762b9a4cb7ada7a7fcfb33e4d"} Jan 28 16:03:51 crc kubenswrapper[4871]: I0128 16:03:51.812362 4871 generic.go:334] "Generic (PLEG): container finished" podID="ff85f7c6-824e-410e-a946-fa96b12ffb33" containerID="837496dbab9669695bb0ee8dada1df84d03ec36a39b8c33491dd090a4f003f99" exitCode=0 Jan 28 16:03:51 crc kubenswrapper[4871]: I0128 16:03:51.812441 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7fcf" event={"ID":"ff85f7c6-824e-410e-a946-fa96b12ffb33","Type":"ContainerDied","Data":"837496dbab9669695bb0ee8dada1df84d03ec36a39b8c33491dd090a4f003f99"} Jan 28 16:03:52 crc kubenswrapper[4871]: I0128 16:03:52.823379 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7fcf" event={"ID":"ff85f7c6-824e-410e-a946-fa96b12ffb33","Type":"ContainerStarted","Data":"9cc786dfdfc6617ce2fd5a3556dd47be8d1dddd82a21cca8e8bf5bc94d3adc63"} Jan 28 16:03:52 crc kubenswrapper[4871]: I0128 16:03:52.843107 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b7fcf" podStartSLOduration=3.43069818 podStartE2EDuration="5.843080536s" podCreationTimestamp="2026-01-28 16:03:47 +0000 UTC" firstStartedPulling="2026-01-28 16:03:49.797156131 +0000 UTC m=+2781.692994453" lastFinishedPulling="2026-01-28 16:03:52.209538487 +0000 UTC m=+2784.105376809" observedRunningTime="2026-01-28 16:03:52.840973259 +0000 UTC m=+2784.736811601" watchObservedRunningTime="2026-01-28 16:03:52.843080536 +0000 UTC m=+2784.738918908" Jan 28 16:03:58 crc kubenswrapper[4871]: I0128 16:03:58.334066 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b7fcf" Jan 28 16:03:58 crc kubenswrapper[4871]: I0128 16:03:58.335775 4871 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-b7fcf" Jan 28 16:03:58 crc kubenswrapper[4871]: I0128 16:03:58.382751 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b7fcf" Jan 28 16:03:58 crc kubenswrapper[4871]: I0128 16:03:58.913825 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b7fcf" Jan 28 16:03:58 crc kubenswrapper[4871]: I0128 16:03:58.958999 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b7fcf"] Jan 28 16:04:00 crc kubenswrapper[4871]: I0128 16:04:00.880832 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b7fcf" podUID="ff85f7c6-824e-410e-a946-fa96b12ffb33" containerName="registry-server" containerID="cri-o://9cc786dfdfc6617ce2fd5a3556dd47be8d1dddd82a21cca8e8bf5bc94d3adc63" gracePeriod=2 Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.264447 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b7fcf" Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.377471 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff85f7c6-824e-410e-a946-fa96b12ffb33-utilities\") pod \"ff85f7c6-824e-410e-a946-fa96b12ffb33\" (UID: \"ff85f7c6-824e-410e-a946-fa96b12ffb33\") " Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.377538 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h696\" (UniqueName: \"kubernetes.io/projected/ff85f7c6-824e-410e-a946-fa96b12ffb33-kube-api-access-5h696\") pod \"ff85f7c6-824e-410e-a946-fa96b12ffb33\" (UID: \"ff85f7c6-824e-410e-a946-fa96b12ffb33\") " Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.377636 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff85f7c6-824e-410e-a946-fa96b12ffb33-catalog-content\") pod \"ff85f7c6-824e-410e-a946-fa96b12ffb33\" (UID: \"ff85f7c6-824e-410e-a946-fa96b12ffb33\") " Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.378541 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff85f7c6-824e-410e-a946-fa96b12ffb33-utilities" (OuterVolumeSpecName: "utilities") pod "ff85f7c6-824e-410e-a946-fa96b12ffb33" (UID: "ff85f7c6-824e-410e-a946-fa96b12ffb33"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.378796 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff85f7c6-824e-410e-a946-fa96b12ffb33-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.383449 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff85f7c6-824e-410e-a946-fa96b12ffb33-kube-api-access-5h696" (OuterVolumeSpecName: "kube-api-access-5h696") pod "ff85f7c6-824e-410e-a946-fa96b12ffb33" (UID: "ff85f7c6-824e-410e-a946-fa96b12ffb33"). InnerVolumeSpecName "kube-api-access-5h696". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.430055 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff85f7c6-824e-410e-a946-fa96b12ffb33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff85f7c6-824e-410e-a946-fa96b12ffb33" (UID: "ff85f7c6-824e-410e-a946-fa96b12ffb33"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.480630 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h696\" (UniqueName: \"kubernetes.io/projected/ff85f7c6-824e-410e-a946-fa96b12ffb33-kube-api-access-5h696\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.480671 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff85f7c6-824e-410e-a946-fa96b12ffb33-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.889035 4871 generic.go:334] "Generic (PLEG): container finished" podID="ff85f7c6-824e-410e-a946-fa96b12ffb33" containerID="9cc786dfdfc6617ce2fd5a3556dd47be8d1dddd82a21cca8e8bf5bc94d3adc63" exitCode=0 Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.889084 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7fcf" event={"ID":"ff85f7c6-824e-410e-a946-fa96b12ffb33","Type":"ContainerDied","Data":"9cc786dfdfc6617ce2fd5a3556dd47be8d1dddd82a21cca8e8bf5bc94d3adc63"} Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.889371 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7fcf" event={"ID":"ff85f7c6-824e-410e-a946-fa96b12ffb33","Type":"ContainerDied","Data":"cce28cd13558f5876a522809b66fa8d5c00ffe1762b9a4cb7ada7a7fcfb33e4d"} Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.889391 4871 scope.go:117] "RemoveContainer" containerID="9cc786dfdfc6617ce2fd5a3556dd47be8d1dddd82a21cca8e8bf5bc94d3adc63" Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.889111 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b7fcf" Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.914007 4871 scope.go:117] "RemoveContainer" containerID="837496dbab9669695bb0ee8dada1df84d03ec36a39b8c33491dd090a4f003f99" Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.936756 4871 scope.go:117] "RemoveContainer" containerID="39a1cba9a04588d3da9ed7b145b0afdbffccd1a41884ab16aa7a67615f035f5c" Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.941876 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b7fcf"] Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.948484 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b7fcf"] Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.969739 4871 scope.go:117] "RemoveContainer" containerID="9cc786dfdfc6617ce2fd5a3556dd47be8d1dddd82a21cca8e8bf5bc94d3adc63" Jan 28 16:04:01 crc kubenswrapper[4871]: E0128 16:04:01.970404 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cc786dfdfc6617ce2fd5a3556dd47be8d1dddd82a21cca8e8bf5bc94d3adc63\": container with ID starting with 9cc786dfdfc6617ce2fd5a3556dd47be8d1dddd82a21cca8e8bf5bc94d3adc63 not found: ID does not exist" containerID="9cc786dfdfc6617ce2fd5a3556dd47be8d1dddd82a21cca8e8bf5bc94d3adc63" Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.970461 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cc786dfdfc6617ce2fd5a3556dd47be8d1dddd82a21cca8e8bf5bc94d3adc63"} err="failed to get container status \"9cc786dfdfc6617ce2fd5a3556dd47be8d1dddd82a21cca8e8bf5bc94d3adc63\": rpc error: code = NotFound desc = could not find container \"9cc786dfdfc6617ce2fd5a3556dd47be8d1dddd82a21cca8e8bf5bc94d3adc63\": container with ID starting with 9cc786dfdfc6617ce2fd5a3556dd47be8d1dddd82a21cca8e8bf5bc94d3adc63 not 
found: ID does not exist" Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.970495 4871 scope.go:117] "RemoveContainer" containerID="837496dbab9669695bb0ee8dada1df84d03ec36a39b8c33491dd090a4f003f99" Jan 28 16:04:01 crc kubenswrapper[4871]: E0128 16:04:01.971100 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"837496dbab9669695bb0ee8dada1df84d03ec36a39b8c33491dd090a4f003f99\": container with ID starting with 837496dbab9669695bb0ee8dada1df84d03ec36a39b8c33491dd090a4f003f99 not found: ID does not exist" containerID="837496dbab9669695bb0ee8dada1df84d03ec36a39b8c33491dd090a4f003f99" Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.971133 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"837496dbab9669695bb0ee8dada1df84d03ec36a39b8c33491dd090a4f003f99"} err="failed to get container status \"837496dbab9669695bb0ee8dada1df84d03ec36a39b8c33491dd090a4f003f99\": rpc error: code = NotFound desc = could not find container \"837496dbab9669695bb0ee8dada1df84d03ec36a39b8c33491dd090a4f003f99\": container with ID starting with 837496dbab9669695bb0ee8dada1df84d03ec36a39b8c33491dd090a4f003f99 not found: ID does not exist" Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.971180 4871 scope.go:117] "RemoveContainer" containerID="39a1cba9a04588d3da9ed7b145b0afdbffccd1a41884ab16aa7a67615f035f5c" Jan 28 16:04:01 crc kubenswrapper[4871]: E0128 16:04:01.971537 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39a1cba9a04588d3da9ed7b145b0afdbffccd1a41884ab16aa7a67615f035f5c\": container with ID starting with 39a1cba9a04588d3da9ed7b145b0afdbffccd1a41884ab16aa7a67615f035f5c not found: ID does not exist" containerID="39a1cba9a04588d3da9ed7b145b0afdbffccd1a41884ab16aa7a67615f035f5c" Jan 28 16:04:01 crc kubenswrapper[4871]: I0128 16:04:01.971574 4871 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39a1cba9a04588d3da9ed7b145b0afdbffccd1a41884ab16aa7a67615f035f5c"} err="failed to get container status \"39a1cba9a04588d3da9ed7b145b0afdbffccd1a41884ab16aa7a67615f035f5c\": rpc error: code = NotFound desc = could not find container \"39a1cba9a04588d3da9ed7b145b0afdbffccd1a41884ab16aa7a67615f035f5c\": container with ID starting with 39a1cba9a04588d3da9ed7b145b0afdbffccd1a41884ab16aa7a67615f035f5c not found: ID does not exist" Jan 28 16:04:02 crc kubenswrapper[4871]: I0128 16:04:02.918029 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff85f7c6-824e-410e-a946-fa96b12ffb33" path="/var/lib/kubelet/pods/ff85f7c6-824e-410e-a946-fa96b12ffb33/volumes" Jan 28 16:04:46 crc kubenswrapper[4871]: I0128 16:04:46.703647 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b4whz"] Jan 28 16:04:46 crc kubenswrapper[4871]: E0128 16:04:46.704483 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff85f7c6-824e-410e-a946-fa96b12ffb33" containerName="extract-content" Jan 28 16:04:46 crc kubenswrapper[4871]: I0128 16:04:46.704496 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff85f7c6-824e-410e-a946-fa96b12ffb33" containerName="extract-content" Jan 28 16:04:46 crc kubenswrapper[4871]: E0128 16:04:46.704515 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff85f7c6-824e-410e-a946-fa96b12ffb33" containerName="registry-server" Jan 28 16:04:46 crc kubenswrapper[4871]: I0128 16:04:46.704522 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff85f7c6-824e-410e-a946-fa96b12ffb33" containerName="registry-server" Jan 28 16:04:46 crc kubenswrapper[4871]: E0128 16:04:46.704550 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff85f7c6-824e-410e-a946-fa96b12ffb33" containerName="extract-utilities" Jan 28 16:04:46 crc kubenswrapper[4871]: I0128 
16:04:46.704557 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff85f7c6-824e-410e-a946-fa96b12ffb33" containerName="extract-utilities" Jan 28 16:04:46 crc kubenswrapper[4871]: I0128 16:04:46.704743 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff85f7c6-824e-410e-a946-fa96b12ffb33" containerName="registry-server" Jan 28 16:04:46 crc kubenswrapper[4871]: I0128 16:04:46.705893 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b4whz" Jan 28 16:04:46 crc kubenswrapper[4871]: I0128 16:04:46.716607 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b4whz"] Jan 28 16:04:46 crc kubenswrapper[4871]: I0128 16:04:46.838732 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce39181-9780-4009-a259-c7532a83ef1c-utilities\") pod \"certified-operators-b4whz\" (UID: \"6ce39181-9780-4009-a259-c7532a83ef1c\") " pod="openshift-marketplace/certified-operators-b4whz" Jan 28 16:04:46 crc kubenswrapper[4871]: I0128 16:04:46.838788 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce39181-9780-4009-a259-c7532a83ef1c-catalog-content\") pod \"certified-operators-b4whz\" (UID: \"6ce39181-9780-4009-a259-c7532a83ef1c\") " pod="openshift-marketplace/certified-operators-b4whz" Jan 28 16:04:46 crc kubenswrapper[4871]: I0128 16:04:46.838867 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4crxn\" (UniqueName: \"kubernetes.io/projected/6ce39181-9780-4009-a259-c7532a83ef1c-kube-api-access-4crxn\") pod \"certified-operators-b4whz\" (UID: \"6ce39181-9780-4009-a259-c7532a83ef1c\") " pod="openshift-marketplace/certified-operators-b4whz" Jan 28 16:04:46 crc 
kubenswrapper[4871]: I0128 16:04:46.940276 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4crxn\" (UniqueName: \"kubernetes.io/projected/6ce39181-9780-4009-a259-c7532a83ef1c-kube-api-access-4crxn\") pod \"certified-operators-b4whz\" (UID: \"6ce39181-9780-4009-a259-c7532a83ef1c\") " pod="openshift-marketplace/certified-operators-b4whz" Jan 28 16:04:46 crc kubenswrapper[4871]: I0128 16:04:46.940480 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce39181-9780-4009-a259-c7532a83ef1c-utilities\") pod \"certified-operators-b4whz\" (UID: \"6ce39181-9780-4009-a259-c7532a83ef1c\") " pod="openshift-marketplace/certified-operators-b4whz" Jan 28 16:04:46 crc kubenswrapper[4871]: I0128 16:04:46.940508 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce39181-9780-4009-a259-c7532a83ef1c-catalog-content\") pod \"certified-operators-b4whz\" (UID: \"6ce39181-9780-4009-a259-c7532a83ef1c\") " pod="openshift-marketplace/certified-operators-b4whz" Jan 28 16:04:46 crc kubenswrapper[4871]: I0128 16:04:46.941053 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce39181-9780-4009-a259-c7532a83ef1c-catalog-content\") pod \"certified-operators-b4whz\" (UID: \"6ce39181-9780-4009-a259-c7532a83ef1c\") " pod="openshift-marketplace/certified-operators-b4whz" Jan 28 16:04:46 crc kubenswrapper[4871]: I0128 16:04:46.941497 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce39181-9780-4009-a259-c7532a83ef1c-utilities\") pod \"certified-operators-b4whz\" (UID: \"6ce39181-9780-4009-a259-c7532a83ef1c\") " pod="openshift-marketplace/certified-operators-b4whz" Jan 28 16:04:46 crc kubenswrapper[4871]: I0128 16:04:46.969464 
4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4crxn\" (UniqueName: \"kubernetes.io/projected/6ce39181-9780-4009-a259-c7532a83ef1c-kube-api-access-4crxn\") pod \"certified-operators-b4whz\" (UID: \"6ce39181-9780-4009-a259-c7532a83ef1c\") " pod="openshift-marketplace/certified-operators-b4whz" Jan 28 16:04:47 crc kubenswrapper[4871]: I0128 16:04:47.023486 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b4whz" Jan 28 16:04:47 crc kubenswrapper[4871]: I0128 16:04:47.608657 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b4whz"] Jan 28 16:04:48 crc kubenswrapper[4871]: I0128 16:04:48.302621 4871 generic.go:334] "Generic (PLEG): container finished" podID="6ce39181-9780-4009-a259-c7532a83ef1c" containerID="ef5ce53d662adc74f359588a64f043410f86d37ca0837bd2c1e65c63e8534978" exitCode=0 Jan 28 16:04:48 crc kubenswrapper[4871]: I0128 16:04:48.302669 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4whz" event={"ID":"6ce39181-9780-4009-a259-c7532a83ef1c","Type":"ContainerDied","Data":"ef5ce53d662adc74f359588a64f043410f86d37ca0837bd2c1e65c63e8534978"} Jan 28 16:04:48 crc kubenswrapper[4871]: I0128 16:04:48.302699 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4whz" event={"ID":"6ce39181-9780-4009-a259-c7532a83ef1c","Type":"ContainerStarted","Data":"75070d0deb0b31a9dadbabf2de4a7c77a217a7fc42cdc22abc8a40649562e8e6"} Jan 28 16:04:51 crc kubenswrapper[4871]: I0128 16:04:51.334352 4871 generic.go:334] "Generic (PLEG): container finished" podID="6ce39181-9780-4009-a259-c7532a83ef1c" containerID="cc78910dbd0fd5251905e12fa83b2c54509e001cbfd46e73fa28f29e72b06cdf" exitCode=0 Jan 28 16:04:51 crc kubenswrapper[4871]: I0128 16:04:51.334468 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-b4whz" event={"ID":"6ce39181-9780-4009-a259-c7532a83ef1c","Type":"ContainerDied","Data":"cc78910dbd0fd5251905e12fa83b2c54509e001cbfd46e73fa28f29e72b06cdf"} Jan 28 16:04:53 crc kubenswrapper[4871]: I0128 16:04:53.351854 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4whz" event={"ID":"6ce39181-9780-4009-a259-c7532a83ef1c","Type":"ContainerStarted","Data":"aa8de2e270393ee7690f32b26ff78e4850d0bbd290aa680d87137b7ae9e6957d"} Jan 28 16:04:53 crc kubenswrapper[4871]: I0128 16:04:53.375269 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b4whz" podStartSLOduration=3.6011247490000002 podStartE2EDuration="7.375250499s" podCreationTimestamp="2026-01-28 16:04:46 +0000 UTC" firstStartedPulling="2026-01-28 16:04:48.3059119 +0000 UTC m=+2840.201750242" lastFinishedPulling="2026-01-28 16:04:52.08003767 +0000 UTC m=+2843.975875992" observedRunningTime="2026-01-28 16:04:53.374784395 +0000 UTC m=+2845.270622727" watchObservedRunningTime="2026-01-28 16:04:53.375250499 +0000 UTC m=+2845.271088821" Jan 28 16:04:57 crc kubenswrapper[4871]: I0128 16:04:57.024368 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b4whz" Jan 28 16:04:57 crc kubenswrapper[4871]: I0128 16:04:57.025000 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b4whz" Jan 28 16:04:57 crc kubenswrapper[4871]: I0128 16:04:57.084337 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b4whz" Jan 28 16:04:57 crc kubenswrapper[4871]: I0128 16:04:57.431786 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b4whz" Jan 28 16:05:00 crc kubenswrapper[4871]: I0128 16:05:00.551377 4871 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b4whz"] Jan 28 16:05:00 crc kubenswrapper[4871]: I0128 16:05:00.552138 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b4whz" podUID="6ce39181-9780-4009-a259-c7532a83ef1c" containerName="registry-server" containerID="cri-o://aa8de2e270393ee7690f32b26ff78e4850d0bbd290aa680d87137b7ae9e6957d" gracePeriod=2 Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.012519 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b4whz" Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.093977 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce39181-9780-4009-a259-c7532a83ef1c-catalog-content\") pod \"6ce39181-9780-4009-a259-c7532a83ef1c\" (UID: \"6ce39181-9780-4009-a259-c7532a83ef1c\") " Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.094318 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4crxn\" (UniqueName: \"kubernetes.io/projected/6ce39181-9780-4009-a259-c7532a83ef1c-kube-api-access-4crxn\") pod \"6ce39181-9780-4009-a259-c7532a83ef1c\" (UID: \"6ce39181-9780-4009-a259-c7532a83ef1c\") " Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.094582 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce39181-9780-4009-a259-c7532a83ef1c-utilities\") pod \"6ce39181-9780-4009-a259-c7532a83ef1c\" (UID: \"6ce39181-9780-4009-a259-c7532a83ef1c\") " Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.095518 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ce39181-9780-4009-a259-c7532a83ef1c-utilities" (OuterVolumeSpecName: "utilities") pod 
"6ce39181-9780-4009-a259-c7532a83ef1c" (UID: "6ce39181-9780-4009-a259-c7532a83ef1c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.101206 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ce39181-9780-4009-a259-c7532a83ef1c-kube-api-access-4crxn" (OuterVolumeSpecName: "kube-api-access-4crxn") pod "6ce39181-9780-4009-a259-c7532a83ef1c" (UID: "6ce39181-9780-4009-a259-c7532a83ef1c"). InnerVolumeSpecName "kube-api-access-4crxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.153387 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ce39181-9780-4009-a259-c7532a83ef1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ce39181-9780-4009-a259-c7532a83ef1c" (UID: "6ce39181-9780-4009-a259-c7532a83ef1c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.197416 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce39181-9780-4009-a259-c7532a83ef1c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.197473 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4crxn\" (UniqueName: \"kubernetes.io/projected/6ce39181-9780-4009-a259-c7532a83ef1c-kube-api-access-4crxn\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.197491 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce39181-9780-4009-a259-c7532a83ef1c-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.418008 4871 generic.go:334] "Generic (PLEG): container finished" podID="6ce39181-9780-4009-a259-c7532a83ef1c" containerID="aa8de2e270393ee7690f32b26ff78e4850d0bbd290aa680d87137b7ae9e6957d" exitCode=0 Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.418255 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4whz" event={"ID":"6ce39181-9780-4009-a259-c7532a83ef1c","Type":"ContainerDied","Data":"aa8de2e270393ee7690f32b26ff78e4850d0bbd290aa680d87137b7ae9e6957d"} Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.418442 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4whz" event={"ID":"6ce39181-9780-4009-a259-c7532a83ef1c","Type":"ContainerDied","Data":"75070d0deb0b31a9dadbabf2de4a7c77a217a7fc42cdc22abc8a40649562e8e6"} Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.418454 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b4whz" Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.418513 4871 scope.go:117] "RemoveContainer" containerID="aa8de2e270393ee7690f32b26ff78e4850d0bbd290aa680d87137b7ae9e6957d" Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.447705 4871 scope.go:117] "RemoveContainer" containerID="cc78910dbd0fd5251905e12fa83b2c54509e001cbfd46e73fa28f29e72b06cdf" Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.461351 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b4whz"] Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.468659 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b4whz"] Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.478280 4871 scope.go:117] "RemoveContainer" containerID="ef5ce53d662adc74f359588a64f043410f86d37ca0837bd2c1e65c63e8534978" Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.513007 4871 scope.go:117] "RemoveContainer" containerID="aa8de2e270393ee7690f32b26ff78e4850d0bbd290aa680d87137b7ae9e6957d" Jan 28 16:05:01 crc kubenswrapper[4871]: E0128 16:05:01.513505 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa8de2e270393ee7690f32b26ff78e4850d0bbd290aa680d87137b7ae9e6957d\": container with ID starting with aa8de2e270393ee7690f32b26ff78e4850d0bbd290aa680d87137b7ae9e6957d not found: ID does not exist" containerID="aa8de2e270393ee7690f32b26ff78e4850d0bbd290aa680d87137b7ae9e6957d" Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.513565 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa8de2e270393ee7690f32b26ff78e4850d0bbd290aa680d87137b7ae9e6957d"} err="failed to get container status \"aa8de2e270393ee7690f32b26ff78e4850d0bbd290aa680d87137b7ae9e6957d\": rpc error: code = NotFound desc = could not find 
container \"aa8de2e270393ee7690f32b26ff78e4850d0bbd290aa680d87137b7ae9e6957d\": container with ID starting with aa8de2e270393ee7690f32b26ff78e4850d0bbd290aa680d87137b7ae9e6957d not found: ID does not exist" Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.513615 4871 scope.go:117] "RemoveContainer" containerID="cc78910dbd0fd5251905e12fa83b2c54509e001cbfd46e73fa28f29e72b06cdf" Jan 28 16:05:01 crc kubenswrapper[4871]: E0128 16:05:01.514256 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc78910dbd0fd5251905e12fa83b2c54509e001cbfd46e73fa28f29e72b06cdf\": container with ID starting with cc78910dbd0fd5251905e12fa83b2c54509e001cbfd46e73fa28f29e72b06cdf not found: ID does not exist" containerID="cc78910dbd0fd5251905e12fa83b2c54509e001cbfd46e73fa28f29e72b06cdf" Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.514319 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc78910dbd0fd5251905e12fa83b2c54509e001cbfd46e73fa28f29e72b06cdf"} err="failed to get container status \"cc78910dbd0fd5251905e12fa83b2c54509e001cbfd46e73fa28f29e72b06cdf\": rpc error: code = NotFound desc = could not find container \"cc78910dbd0fd5251905e12fa83b2c54509e001cbfd46e73fa28f29e72b06cdf\": container with ID starting with cc78910dbd0fd5251905e12fa83b2c54509e001cbfd46e73fa28f29e72b06cdf not found: ID does not exist" Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.514355 4871 scope.go:117] "RemoveContainer" containerID="ef5ce53d662adc74f359588a64f043410f86d37ca0837bd2c1e65c63e8534978" Jan 28 16:05:01 crc kubenswrapper[4871]: E0128 16:05:01.514732 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef5ce53d662adc74f359588a64f043410f86d37ca0837bd2c1e65c63e8534978\": container with ID starting with ef5ce53d662adc74f359588a64f043410f86d37ca0837bd2c1e65c63e8534978 not found: ID does 
not exist" containerID="ef5ce53d662adc74f359588a64f043410f86d37ca0837bd2c1e65c63e8534978" Jan 28 16:05:01 crc kubenswrapper[4871]: I0128 16:05:01.514764 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef5ce53d662adc74f359588a64f043410f86d37ca0837bd2c1e65c63e8534978"} err="failed to get container status \"ef5ce53d662adc74f359588a64f043410f86d37ca0837bd2c1e65c63e8534978\": rpc error: code = NotFound desc = could not find container \"ef5ce53d662adc74f359588a64f043410f86d37ca0837bd2c1e65c63e8534978\": container with ID starting with ef5ce53d662adc74f359588a64f043410f86d37ca0837bd2c1e65c63e8534978 not found: ID does not exist" Jan 28 16:05:02 crc kubenswrapper[4871]: I0128 16:05:02.912854 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ce39181-9780-4009-a259-c7532a83ef1c" path="/var/lib/kubelet/pods/6ce39181-9780-4009-a259-c7532a83ef1c/volumes" Jan 28 16:05:13 crc kubenswrapper[4871]: I0128 16:05:13.813688 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:05:13 crc kubenswrapper[4871]: I0128 16:05:13.814228 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:05:43 crc kubenswrapper[4871]: I0128 16:05:43.813553 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 28 16:05:43 crc kubenswrapper[4871]: I0128 16:05:43.814188 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:06:13 crc kubenswrapper[4871]: I0128 16:06:13.813248 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:06:13 crc kubenswrapper[4871]: I0128 16:06:13.813810 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:06:13 crc kubenswrapper[4871]: I0128 16:06:13.813853 4871 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 16:06:13 crc kubenswrapper[4871]: I0128 16:06:13.814445 4871 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"43cb38f5cda5c90164b6b38b56d82a43200106b8ddf13410e4e80153656563fe"} pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:06:13 crc kubenswrapper[4871]: I0128 16:06:13.814498 4871 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" containerID="cri-o://43cb38f5cda5c90164b6b38b56d82a43200106b8ddf13410e4e80153656563fe" gracePeriod=600 Jan 28 16:06:14 crc kubenswrapper[4871]: I0128 16:06:14.014093 4871 generic.go:334] "Generic (PLEG): container finished" podID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerID="43cb38f5cda5c90164b6b38b56d82a43200106b8ddf13410e4e80153656563fe" exitCode=0 Jan 28 16:06:14 crc kubenswrapper[4871]: I0128 16:06:14.014130 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerDied","Data":"43cb38f5cda5c90164b6b38b56d82a43200106b8ddf13410e4e80153656563fe"} Jan 28 16:06:14 crc kubenswrapper[4871]: I0128 16:06:14.014456 4871 scope.go:117] "RemoveContainer" containerID="02b4c02b7508a0853c666eb078cd69627122b7b1e7da7c6e76e591a55558b299" Jan 28 16:06:15 crc kubenswrapper[4871]: I0128 16:06:15.025712 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerStarted","Data":"b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b"} Jan 28 16:07:56 crc kubenswrapper[4871]: I0128 16:07:56.301375 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jfvkh/must-gather-cbftf"] Jan 28 16:07:56 crc kubenswrapper[4871]: E0128 16:07:56.302325 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ce39181-9780-4009-a259-c7532a83ef1c" containerName="extract-utilities" Jan 28 16:07:56 crc kubenswrapper[4871]: I0128 16:07:56.302342 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce39181-9780-4009-a259-c7532a83ef1c" containerName="extract-utilities" Jan 28 16:07:56 crc kubenswrapper[4871]: E0128 16:07:56.302352 4871 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ce39181-9780-4009-a259-c7532a83ef1c" containerName="extract-content" Jan 28 16:07:56 crc kubenswrapper[4871]: I0128 16:07:56.302359 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce39181-9780-4009-a259-c7532a83ef1c" containerName="extract-content" Jan 28 16:07:56 crc kubenswrapper[4871]: E0128 16:07:56.302388 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ce39181-9780-4009-a259-c7532a83ef1c" containerName="registry-server" Jan 28 16:07:56 crc kubenswrapper[4871]: I0128 16:07:56.302395 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce39181-9780-4009-a259-c7532a83ef1c" containerName="registry-server" Jan 28 16:07:56 crc kubenswrapper[4871]: I0128 16:07:56.302555 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ce39181-9780-4009-a259-c7532a83ef1c" containerName="registry-server" Jan 28 16:07:56 crc kubenswrapper[4871]: I0128 16:07:56.303392 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jfvkh/must-gather-cbftf" Jan 28 16:07:56 crc kubenswrapper[4871]: I0128 16:07:56.309000 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jfvkh"/"openshift-service-ca.crt" Jan 28 16:07:56 crc kubenswrapper[4871]: I0128 16:07:56.309155 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jfvkh"/"kube-root-ca.crt" Jan 28 16:07:56 crc kubenswrapper[4871]: I0128 16:07:56.314521 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jfvkh/must-gather-cbftf"] Jan 28 16:07:56 crc kubenswrapper[4871]: I0128 16:07:56.437765 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgnjw\" (UniqueName: \"kubernetes.io/projected/f662252f-c0a4-4ed5-8c93-67468b6b026c-kube-api-access-qgnjw\") pod \"must-gather-cbftf\" (UID: \"f662252f-c0a4-4ed5-8c93-67468b6b026c\") " pod="openshift-must-gather-jfvkh/must-gather-cbftf" Jan 28 16:07:56 crc kubenswrapper[4871]: I0128 16:07:56.437925 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f662252f-c0a4-4ed5-8c93-67468b6b026c-must-gather-output\") pod \"must-gather-cbftf\" (UID: \"f662252f-c0a4-4ed5-8c93-67468b6b026c\") " pod="openshift-must-gather-jfvkh/must-gather-cbftf" Jan 28 16:07:56 crc kubenswrapper[4871]: I0128 16:07:56.539608 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f662252f-c0a4-4ed5-8c93-67468b6b026c-must-gather-output\") pod \"must-gather-cbftf\" (UID: \"f662252f-c0a4-4ed5-8c93-67468b6b026c\") " pod="openshift-must-gather-jfvkh/must-gather-cbftf" Jan 28 16:07:56 crc kubenswrapper[4871]: I0128 16:07:56.539697 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qgnjw\" (UniqueName: \"kubernetes.io/projected/f662252f-c0a4-4ed5-8c93-67468b6b026c-kube-api-access-qgnjw\") pod \"must-gather-cbftf\" (UID: \"f662252f-c0a4-4ed5-8c93-67468b6b026c\") " pod="openshift-must-gather-jfvkh/must-gather-cbftf" Jan 28 16:07:56 crc kubenswrapper[4871]: I0128 16:07:56.540134 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f662252f-c0a4-4ed5-8c93-67468b6b026c-must-gather-output\") pod \"must-gather-cbftf\" (UID: \"f662252f-c0a4-4ed5-8c93-67468b6b026c\") " pod="openshift-must-gather-jfvkh/must-gather-cbftf" Jan 28 16:07:56 crc kubenswrapper[4871]: I0128 16:07:56.561423 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgnjw\" (UniqueName: \"kubernetes.io/projected/f662252f-c0a4-4ed5-8c93-67468b6b026c-kube-api-access-qgnjw\") pod \"must-gather-cbftf\" (UID: \"f662252f-c0a4-4ed5-8c93-67468b6b026c\") " pod="openshift-must-gather-jfvkh/must-gather-cbftf" Jan 28 16:07:56 crc kubenswrapper[4871]: I0128 16:07:56.622817 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jfvkh/must-gather-cbftf" Jan 28 16:07:57 crc kubenswrapper[4871]: I0128 16:07:57.136127 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jfvkh/must-gather-cbftf"] Jan 28 16:07:57 crc kubenswrapper[4871]: I0128 16:07:57.150822 4871 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 16:07:57 crc kubenswrapper[4871]: I0128 16:07:57.995957 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfvkh/must-gather-cbftf" event={"ID":"f662252f-c0a4-4ed5-8c93-67468b6b026c","Type":"ContainerStarted","Data":"6a4f4d39707dae86421ecfc11a7503f78793c97d5309ce9ab46c9bf7177d1e7a"} Jan 28 16:08:03 crc kubenswrapper[4871]: I0128 16:08:03.978793 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jfvkh/crc-debug-bfcsz"] Jan 28 16:08:03 crc kubenswrapper[4871]: I0128 16:08:03.980336 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jfvkh/crc-debug-bfcsz" Jan 28 16:08:03 crc kubenswrapper[4871]: I0128 16:08:03.987224 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jfvkh"/"default-dockercfg-sp2dr" Jan 28 16:08:04 crc kubenswrapper[4871]: I0128 16:08:04.039159 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfvkh/must-gather-cbftf" event={"ID":"f662252f-c0a4-4ed5-8c93-67468b6b026c","Type":"ContainerStarted","Data":"81089ab5611e439396d66f6b46e9f8e9d7474e31cb68f18a24a5fec6094a2035"} Jan 28 16:08:04 crc kubenswrapper[4871]: I0128 16:08:04.039210 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfvkh/must-gather-cbftf" event={"ID":"f662252f-c0a4-4ed5-8c93-67468b6b026c","Type":"ContainerStarted","Data":"d3b6721e5effce2e7356a1e5142331aa05506ec7edd9319f486a3752fae1f127"} Jan 28 16:08:04 crc kubenswrapper[4871]: I0128 16:08:04.063005 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jfvkh/must-gather-cbftf" podStartSLOduration=2.089793536 podStartE2EDuration="8.062974536s" podCreationTimestamp="2026-01-28 16:07:56 +0000 UTC" firstStartedPulling="2026-01-28 16:07:57.150765953 +0000 UTC m=+3029.046604275" lastFinishedPulling="2026-01-28 16:08:03.123946953 +0000 UTC m=+3035.019785275" observedRunningTime="2026-01-28 16:08:04.053612274 +0000 UTC m=+3035.949450596" watchObservedRunningTime="2026-01-28 16:08:04.062974536 +0000 UTC m=+3035.958812868" Jan 28 16:08:04 crc kubenswrapper[4871]: I0128 16:08:04.092522 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfgrb\" (UniqueName: \"kubernetes.io/projected/bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b-kube-api-access-gfgrb\") pod \"crc-debug-bfcsz\" (UID: \"bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b\") " pod="openshift-must-gather-jfvkh/crc-debug-bfcsz" Jan 28 16:08:04 crc kubenswrapper[4871]: I0128 
16:08:04.092691 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b-host\") pod \"crc-debug-bfcsz\" (UID: \"bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b\") " pod="openshift-must-gather-jfvkh/crc-debug-bfcsz" Jan 28 16:08:04 crc kubenswrapper[4871]: I0128 16:08:04.196524 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b-host\") pod \"crc-debug-bfcsz\" (UID: \"bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b\") " pod="openshift-must-gather-jfvkh/crc-debug-bfcsz" Jan 28 16:08:04 crc kubenswrapper[4871]: I0128 16:08:04.196669 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfgrb\" (UniqueName: \"kubernetes.io/projected/bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b-kube-api-access-gfgrb\") pod \"crc-debug-bfcsz\" (UID: \"bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b\") " pod="openshift-must-gather-jfvkh/crc-debug-bfcsz" Jan 28 16:08:04 crc kubenswrapper[4871]: I0128 16:08:04.196702 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b-host\") pod \"crc-debug-bfcsz\" (UID: \"bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b\") " pod="openshift-must-gather-jfvkh/crc-debug-bfcsz" Jan 28 16:08:04 crc kubenswrapper[4871]: I0128 16:08:04.216640 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfgrb\" (UniqueName: \"kubernetes.io/projected/bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b-kube-api-access-gfgrb\") pod \"crc-debug-bfcsz\" (UID: \"bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b\") " pod="openshift-must-gather-jfvkh/crc-debug-bfcsz" Jan 28 16:08:04 crc kubenswrapper[4871]: I0128 16:08:04.300729 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jfvkh/crc-debug-bfcsz" Jan 28 16:08:04 crc kubenswrapper[4871]: W0128 16:08:04.329102 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb3eccd9_0af1_4822_b4f8_6fe5c4e0be7b.slice/crio-c4fdb20b99b5db74f134f80d95f929561dc957eb79ff9bdbcb79a7084de7a51d WatchSource:0}: Error finding container c4fdb20b99b5db74f134f80d95f929561dc957eb79ff9bdbcb79a7084de7a51d: Status 404 returned error can't find the container with id c4fdb20b99b5db74f134f80d95f929561dc957eb79ff9bdbcb79a7084de7a51d Jan 28 16:08:05 crc kubenswrapper[4871]: I0128 16:08:05.047808 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfvkh/crc-debug-bfcsz" event={"ID":"bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b","Type":"ContainerStarted","Data":"c4fdb20b99b5db74f134f80d95f929561dc957eb79ff9bdbcb79a7084de7a51d"} Jan 28 16:08:18 crc kubenswrapper[4871]: I0128 16:08:18.177258 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfvkh/crc-debug-bfcsz" event={"ID":"bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b","Type":"ContainerStarted","Data":"b4acd5714bc952099e5b475461cac0e61371f7e8a10f907939e56cdd3df57346"} Jan 28 16:08:18 crc kubenswrapper[4871]: I0128 16:08:18.191890 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jfvkh/crc-debug-bfcsz" podStartSLOduration=2.193575546 podStartE2EDuration="15.191868849s" podCreationTimestamp="2026-01-28 16:08:03 +0000 UTC" firstStartedPulling="2026-01-28 16:08:04.3320203 +0000 UTC m=+3036.227858622" lastFinishedPulling="2026-01-28 16:08:17.330313603 +0000 UTC m=+3049.226151925" observedRunningTime="2026-01-28 16:08:18.190200196 +0000 UTC m=+3050.086038538" watchObservedRunningTime="2026-01-28 16:08:18.191868849 +0000 UTC m=+3050.087707171" Jan 28 16:08:42 crc kubenswrapper[4871]: I0128 16:08:42.583996 4871 generic.go:334] "Generic (PLEG): container 
finished" podID="bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b" containerID="b4acd5714bc952099e5b475461cac0e61371f7e8a10f907939e56cdd3df57346" exitCode=0 Jan 28 16:08:42 crc kubenswrapper[4871]: I0128 16:08:42.584252 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfvkh/crc-debug-bfcsz" event={"ID":"bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b","Type":"ContainerDied","Data":"b4acd5714bc952099e5b475461cac0e61371f7e8a10f907939e56cdd3df57346"} Jan 28 16:08:43 crc kubenswrapper[4871]: I0128 16:08:43.713710 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jfvkh/crc-debug-bfcsz" Jan 28 16:08:43 crc kubenswrapper[4871]: I0128 16:08:43.762346 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jfvkh/crc-debug-bfcsz"] Jan 28 16:08:43 crc kubenswrapper[4871]: I0128 16:08:43.776746 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jfvkh/crc-debug-bfcsz"] Jan 28 16:08:43 crc kubenswrapper[4871]: I0128 16:08:43.813744 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:08:43 crc kubenswrapper[4871]: I0128 16:08:43.813853 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:08:43 crc kubenswrapper[4871]: I0128 16:08:43.876215 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b-host\") pod 
\"bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b\" (UID: \"bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b\") " Jan 28 16:08:43 crc kubenswrapper[4871]: I0128 16:08:43.876347 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b-host" (OuterVolumeSpecName: "host") pod "bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b" (UID: "bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:08:43 crc kubenswrapper[4871]: I0128 16:08:43.876504 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfgrb\" (UniqueName: \"kubernetes.io/projected/bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b-kube-api-access-gfgrb\") pod \"bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b\" (UID: \"bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b\") " Jan 28 16:08:43 crc kubenswrapper[4871]: I0128 16:08:43.876949 4871 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b-host\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:43 crc kubenswrapper[4871]: I0128 16:08:43.890947 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b-kube-api-access-gfgrb" (OuterVolumeSpecName: "kube-api-access-gfgrb") pod "bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b" (UID: "bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b"). InnerVolumeSpecName "kube-api-access-gfgrb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:08:43 crc kubenswrapper[4871]: I0128 16:08:43.980748 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfgrb\" (UniqueName: \"kubernetes.io/projected/bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b-kube-api-access-gfgrb\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:44 crc kubenswrapper[4871]: I0128 16:08:44.604560 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4fdb20b99b5db74f134f80d95f929561dc957eb79ff9bdbcb79a7084de7a51d" Jan 28 16:08:44 crc kubenswrapper[4871]: I0128 16:08:44.604687 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jfvkh/crc-debug-bfcsz" Jan 28 16:08:44 crc kubenswrapper[4871]: I0128 16:08:44.915254 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b" path="/var/lib/kubelet/pods/bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b/volumes" Jan 28 16:08:45 crc kubenswrapper[4871]: I0128 16:08:45.074601 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jfvkh/crc-debug-xgtht"] Jan 28 16:08:45 crc kubenswrapper[4871]: E0128 16:08:45.075080 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b" containerName="container-00" Jan 28 16:08:45 crc kubenswrapper[4871]: I0128 16:08:45.075103 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b" containerName="container-00" Jan 28 16:08:45 crc kubenswrapper[4871]: I0128 16:08:45.075320 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3eccd9-0af1-4822-b4f8-6fe5c4e0be7b" containerName="container-00" Jan 28 16:08:45 crc kubenswrapper[4871]: I0128 16:08:45.076030 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jfvkh/crc-debug-xgtht" Jan 28 16:08:45 crc kubenswrapper[4871]: I0128 16:08:45.079712 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jfvkh"/"default-dockercfg-sp2dr" Jan 28 16:08:45 crc kubenswrapper[4871]: I0128 16:08:45.100328 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22b0cdbe-3a6d-4049-bd71-942488121294-host\") pod \"crc-debug-xgtht\" (UID: \"22b0cdbe-3a6d-4049-bd71-942488121294\") " pod="openshift-must-gather-jfvkh/crc-debug-xgtht" Jan 28 16:08:45 crc kubenswrapper[4871]: I0128 16:08:45.100803 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpb2c\" (UniqueName: \"kubernetes.io/projected/22b0cdbe-3a6d-4049-bd71-942488121294-kube-api-access-zpb2c\") pod \"crc-debug-xgtht\" (UID: \"22b0cdbe-3a6d-4049-bd71-942488121294\") " pod="openshift-must-gather-jfvkh/crc-debug-xgtht" Jan 28 16:08:45 crc kubenswrapper[4871]: I0128 16:08:45.202883 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22b0cdbe-3a6d-4049-bd71-942488121294-host\") pod \"crc-debug-xgtht\" (UID: \"22b0cdbe-3a6d-4049-bd71-942488121294\") " pod="openshift-must-gather-jfvkh/crc-debug-xgtht" Jan 28 16:08:45 crc kubenswrapper[4871]: I0128 16:08:45.203013 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpb2c\" (UniqueName: \"kubernetes.io/projected/22b0cdbe-3a6d-4049-bd71-942488121294-kube-api-access-zpb2c\") pod \"crc-debug-xgtht\" (UID: \"22b0cdbe-3a6d-4049-bd71-942488121294\") " pod="openshift-must-gather-jfvkh/crc-debug-xgtht" Jan 28 16:08:45 crc kubenswrapper[4871]: I0128 16:08:45.203264 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/22b0cdbe-3a6d-4049-bd71-942488121294-host\") pod \"crc-debug-xgtht\" (UID: \"22b0cdbe-3a6d-4049-bd71-942488121294\") " pod="openshift-must-gather-jfvkh/crc-debug-xgtht" Jan 28 16:08:45 crc kubenswrapper[4871]: I0128 16:08:45.226043 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpb2c\" (UniqueName: \"kubernetes.io/projected/22b0cdbe-3a6d-4049-bd71-942488121294-kube-api-access-zpb2c\") pod \"crc-debug-xgtht\" (UID: \"22b0cdbe-3a6d-4049-bd71-942488121294\") " pod="openshift-must-gather-jfvkh/crc-debug-xgtht" Jan 28 16:08:45 crc kubenswrapper[4871]: I0128 16:08:45.398166 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jfvkh/crc-debug-xgtht" Jan 28 16:08:45 crc kubenswrapper[4871]: I0128 16:08:45.615312 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfvkh/crc-debug-xgtht" event={"ID":"22b0cdbe-3a6d-4049-bd71-942488121294","Type":"ContainerStarted","Data":"f26af6417a040325fff46bafe279d97c19dfbc5d736942499ba992cef079fbf4"} Jan 28 16:08:46 crc kubenswrapper[4871]: I0128 16:08:46.691276 4871 generic.go:334] "Generic (PLEG): container finished" podID="22b0cdbe-3a6d-4049-bd71-942488121294" containerID="63ac287bae68c8948847b7cbe5d11779473d2a3be76a77b9aaad381664527ab3" exitCode=1 Jan 28 16:08:46 crc kubenswrapper[4871]: I0128 16:08:46.691342 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfvkh/crc-debug-xgtht" event={"ID":"22b0cdbe-3a6d-4049-bd71-942488121294","Type":"ContainerDied","Data":"63ac287bae68c8948847b7cbe5d11779473d2a3be76a77b9aaad381664527ab3"} Jan 28 16:08:46 crc kubenswrapper[4871]: I0128 16:08:46.794924 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jfvkh/crc-debug-xgtht"] Jan 28 16:08:46 crc kubenswrapper[4871]: I0128 16:08:46.811186 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-jfvkh/crc-debug-xgtht"] Jan 28 16:08:47 crc kubenswrapper[4871]: I0128 16:08:47.794630 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jfvkh/crc-debug-xgtht" Jan 28 16:08:47 crc kubenswrapper[4871]: I0128 16:08:47.856848 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22b0cdbe-3a6d-4049-bd71-942488121294-host\") pod \"22b0cdbe-3a6d-4049-bd71-942488121294\" (UID: \"22b0cdbe-3a6d-4049-bd71-942488121294\") " Jan 28 16:08:47 crc kubenswrapper[4871]: I0128 16:08:47.857004 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22b0cdbe-3a6d-4049-bd71-942488121294-host" (OuterVolumeSpecName: "host") pod "22b0cdbe-3a6d-4049-bd71-942488121294" (UID: "22b0cdbe-3a6d-4049-bd71-942488121294"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:08:47 crc kubenswrapper[4871]: I0128 16:08:47.857083 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpb2c\" (UniqueName: \"kubernetes.io/projected/22b0cdbe-3a6d-4049-bd71-942488121294-kube-api-access-zpb2c\") pod \"22b0cdbe-3a6d-4049-bd71-942488121294\" (UID: \"22b0cdbe-3a6d-4049-bd71-942488121294\") " Jan 28 16:08:47 crc kubenswrapper[4871]: I0128 16:08:47.857637 4871 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22b0cdbe-3a6d-4049-bd71-942488121294-host\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:47 crc kubenswrapper[4871]: I0128 16:08:47.870973 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22b0cdbe-3a6d-4049-bd71-942488121294-kube-api-access-zpb2c" (OuterVolumeSpecName: "kube-api-access-zpb2c") pod "22b0cdbe-3a6d-4049-bd71-942488121294" (UID: "22b0cdbe-3a6d-4049-bd71-942488121294"). 
InnerVolumeSpecName "kube-api-access-zpb2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:08:47 crc kubenswrapper[4871]: I0128 16:08:47.958979 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpb2c\" (UniqueName: \"kubernetes.io/projected/22b0cdbe-3a6d-4049-bd71-942488121294-kube-api-access-zpb2c\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:48 crc kubenswrapper[4871]: I0128 16:08:48.715311 4871 scope.go:117] "RemoveContainer" containerID="63ac287bae68c8948847b7cbe5d11779473d2a3be76a77b9aaad381664527ab3" Jan 28 16:08:48 crc kubenswrapper[4871]: I0128 16:08:48.715435 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jfvkh/crc-debug-xgtht" Jan 28 16:08:48 crc kubenswrapper[4871]: I0128 16:08:48.915959 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22b0cdbe-3a6d-4049-bd71-942488121294" path="/var/lib/kubelet/pods/22b0cdbe-3a6d-4049-bd71-942488121294/volumes" Jan 28 16:09:09 crc kubenswrapper[4871]: I0128 16:09:09.627905 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7ff5475cc9-qq2x4_e3142133-060d-4a0d-9a66-2a58279692e3/init/0.log" Jan 28 16:09:09 crc kubenswrapper[4871]: I0128 16:09:09.880579 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7ff5475cc9-qq2x4_e3142133-060d-4a0d-9a66-2a58279692e3/dnsmasq-dns/0.log" Jan 28 16:09:09 crc kubenswrapper[4871]: I0128 16:09:09.886469 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_24325972-e640-4b7b-b5c9-215dd8cd0fea/kube-state-metrics/0.log" Jan 28 16:09:09 crc kubenswrapper[4871]: I0128 16:09:09.931048 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7ff5475cc9-qq2x4_e3142133-060d-4a0d-9a66-2a58279692e3/init/0.log" Jan 28 16:09:10 crc kubenswrapper[4871]: I0128 16:09:10.143259 4871 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_03065e0e-cdb6-49a2-bfe3-28236f770fdc/memcached/0.log" Jan 28 16:09:10 crc kubenswrapper[4871]: I0128 16:09:10.209006 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_634ee164-2990-4b2b-88e4-ce901728e251/mysql-bootstrap/0.log" Jan 28 16:09:10 crc kubenswrapper[4871]: I0128 16:09:10.387244 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_634ee164-2990-4b2b-88e4-ce901728e251/mysql-bootstrap/0.log" Jan 28 16:09:10 crc kubenswrapper[4871]: I0128 16:09:10.403678 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_634ee164-2990-4b2b-88e4-ce901728e251/galera/0.log" Jan 28 16:09:10 crc kubenswrapper[4871]: I0128 16:09:10.464506 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_7241d8aa-248e-46a8-88af-365415f843f8/mysql-bootstrap/0.log" Jan 28 16:09:10 crc kubenswrapper[4871]: I0128 16:09:10.687909 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_7241d8aa-248e-46a8-88af-365415f843f8/galera/0.log" Jan 28 16:09:10 crc kubenswrapper[4871]: I0128 16:09:10.714081 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-2s4s6_10434904-135c-4ec2-a483-1647ce52500b/ovn-controller/0.log" Jan 28 16:09:10 crc kubenswrapper[4871]: I0128 16:09:10.736911 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_7241d8aa-248e-46a8-88af-365415f843f8/mysql-bootstrap/0.log" Jan 28 16:09:10 crc kubenswrapper[4871]: I0128 16:09:10.910384 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-nl92r_81859795-4888-4eae-8589-5a5a4992d584/openstack-network-exporter/0.log" Jan 28 16:09:11 crc kubenswrapper[4871]: I0128 16:09:11.037256 4871 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-c2xpq_8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da/ovsdb-server-init/0.log" Jan 28 16:09:11 crc kubenswrapper[4871]: I0128 16:09:11.198421 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c2xpq_8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da/ovsdb-server-init/0.log" Jan 28 16:09:11 crc kubenswrapper[4871]: I0128 16:09:11.247209 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c2xpq_8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da/ovsdb-server/0.log" Jan 28 16:09:11 crc kubenswrapper[4871]: I0128 16:09:11.264365 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c2xpq_8fcf3e7f-c499-44a8-a2c8-ddb97f31b7da/ovs-vswitchd/0.log" Jan 28 16:09:11 crc kubenswrapper[4871]: I0128 16:09:11.382082 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_eb5764cb-cf47-41e0-9759-d0d894878303/openstack-network-exporter/0.log" Jan 28 16:09:11 crc kubenswrapper[4871]: I0128 16:09:11.443827 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_eb5764cb-cf47-41e0-9759-d0d894878303/ovn-northd/0.log" Jan 28 16:09:11 crc kubenswrapper[4871]: I0128 16:09:11.530540 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8887227a-30f0-4a29-8018-2e18033b3b8f/openstack-network-exporter/0.log" Jan 28 16:09:11 crc kubenswrapper[4871]: I0128 16:09:11.593908 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8887227a-30f0-4a29-8018-2e18033b3b8f/ovsdbserver-nb/0.log" Jan 28 16:09:11 crc kubenswrapper[4871]: I0128 16:09:11.702453 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3fc41ff5-8884-408d-94ca-512e6c34e2d3/openstack-network-exporter/0.log" Jan 28 16:09:11 crc kubenswrapper[4871]: I0128 16:09:11.739984 4871 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_3fc41ff5-8884-408d-94ca-512e6c34e2d3/ovsdbserver-sb/0.log" Jan 28 16:09:11 crc kubenswrapper[4871]: I0128 16:09:11.897625 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_48f16980-86d0-4648-9ebd-a428b5253832/setup-container/0.log" Jan 28 16:09:12 crc kubenswrapper[4871]: I0128 16:09:12.041405 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_48f16980-86d0-4648-9ebd-a428b5253832/setup-container/0.log" Jan 28 16:09:12 crc kubenswrapper[4871]: I0128 16:09:12.081777 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_48f16980-86d0-4648-9ebd-a428b5253832/rabbitmq/0.log" Jan 28 16:09:12 crc kubenswrapper[4871]: I0128 16:09:12.121701 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2/setup-container/0.log" Jan 28 16:09:12 crc kubenswrapper[4871]: I0128 16:09:12.274852 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2/setup-container/0.log" Jan 28 16:09:12 crc kubenswrapper[4871]: I0128 16:09:12.321877 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-4jm4z_b5a578f5-c09e-40cd-b9b6-36b7b1f61370/swift-ring-rebalance/0.log" Jan 28 16:09:12 crc kubenswrapper[4871]: I0128 16:09:12.324702 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1f0aeb3c-a1d8-4cdc-b7bd-0db71062d0e2/rabbitmq/0.log" Jan 28 16:09:12 crc kubenswrapper[4871]: I0128 16:09:12.494647 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6e17493-c4b5-417e-b5b2-42a1a245447e/account-auditor/0.log" Jan 28 16:09:12 crc kubenswrapper[4871]: I0128 16:09:12.515998 4871 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_a6e17493-c4b5-417e-b5b2-42a1a245447e/account-reaper/0.log" Jan 28 16:09:12 crc kubenswrapper[4871]: I0128 16:09:12.564424 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6e17493-c4b5-417e-b5b2-42a1a245447e/account-replicator/0.log" Jan 28 16:09:12 crc kubenswrapper[4871]: I0128 16:09:12.681905 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6e17493-c4b5-417e-b5b2-42a1a245447e/account-server/0.log" Jan 28 16:09:12 crc kubenswrapper[4871]: I0128 16:09:12.697571 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6e17493-c4b5-417e-b5b2-42a1a245447e/container-auditor/0.log" Jan 28 16:09:12 crc kubenswrapper[4871]: I0128 16:09:12.753766 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6e17493-c4b5-417e-b5b2-42a1a245447e/container-server/0.log" Jan 28 16:09:12 crc kubenswrapper[4871]: I0128 16:09:12.755880 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6e17493-c4b5-417e-b5b2-42a1a245447e/container-replicator/0.log" Jan 28 16:09:12 crc kubenswrapper[4871]: I0128 16:09:12.892033 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6e17493-c4b5-417e-b5b2-42a1a245447e/object-auditor/0.log" Jan 28 16:09:12 crc kubenswrapper[4871]: I0128 16:09:12.937064 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6e17493-c4b5-417e-b5b2-42a1a245447e/object-expirer/0.log" Jan 28 16:09:12 crc kubenswrapper[4871]: I0128 16:09:12.947625 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6e17493-c4b5-417e-b5b2-42a1a245447e/container-updater/0.log" Jan 28 16:09:13 crc kubenswrapper[4871]: I0128 16:09:13.036950 4871 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_a6e17493-c4b5-417e-b5b2-42a1a245447e/object-replicator/0.log" Jan 28 16:09:13 crc kubenswrapper[4871]: I0128 16:09:13.118966 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6e17493-c4b5-417e-b5b2-42a1a245447e/object-server/0.log" Jan 28 16:09:13 crc kubenswrapper[4871]: I0128 16:09:13.150907 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6e17493-c4b5-417e-b5b2-42a1a245447e/object-updater/0.log" Jan 28 16:09:13 crc kubenswrapper[4871]: I0128 16:09:13.199004 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6e17493-c4b5-417e-b5b2-42a1a245447e/rsync/0.log" Jan 28 16:09:13 crc kubenswrapper[4871]: I0128 16:09:13.252461 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6e17493-c4b5-417e-b5b2-42a1a245447e/swift-recon-cron/0.log" Jan 28 16:09:13 crc kubenswrapper[4871]: I0128 16:09:13.813912 4871 patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:09:13 crc kubenswrapper[4871]: I0128 16:09:13.814795 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:09:31 crc kubenswrapper[4871]: I0128 16:09:31.442133 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6bc7f4f4cf-hkk6l_270211d8-fb57-4cb0-ba0b-9de5ae660e2e/manager/0.log" Jan 28 16:09:31 crc kubenswrapper[4871]: I0128 
16:09:31.637766 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf_93d9bbfd-0623-49fd-979d-4be49534ad36/util/0.log" Jan 28 16:09:31 crc kubenswrapper[4871]: I0128 16:09:31.794901 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf_93d9bbfd-0623-49fd-979d-4be49534ad36/util/0.log" Jan 28 16:09:31 crc kubenswrapper[4871]: I0128 16:09:31.817452 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf_93d9bbfd-0623-49fd-979d-4be49534ad36/pull/0.log" Jan 28 16:09:31 crc kubenswrapper[4871]: I0128 16:09:31.818459 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf_93d9bbfd-0623-49fd-979d-4be49534ad36/pull/0.log" Jan 28 16:09:32 crc kubenswrapper[4871]: I0128 16:09:32.035422 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf_93d9bbfd-0623-49fd-979d-4be49534ad36/extract/0.log" Jan 28 16:09:32 crc kubenswrapper[4871]: I0128 16:09:32.052621 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf_93d9bbfd-0623-49fd-979d-4be49534ad36/util/0.log" Jan 28 16:09:32 crc kubenswrapper[4871]: I0128 16:09:32.061410 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd8lthf_93d9bbfd-0623-49fd-979d-4be49534ad36/pull/0.log" Jan 28 16:09:32 crc kubenswrapper[4871]: I0128 16:09:32.257130 4871 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-f6487bd57-tb9cx_4f068c70-d72a-4582-96e6-891b7269b1ba/manager/0.log" Jan 28 16:09:32 crc kubenswrapper[4871]: I0128 16:09:32.266260 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66dfbd6f5d-vjjlf_95211b62-9193-4fe4-b851-fe46793fac5b/manager/0.log" Jan 28 16:09:32 crc kubenswrapper[4871]: I0128 16:09:32.478548 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6db5dbd896-l6xx2_310a6fc1-965a-4af9-ab12-2c9b2f8046ff/manager/0.log" Jan 28 16:09:32 crc kubenswrapper[4871]: I0128 16:09:32.512891 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-587c6bfdcf-ngtrg_74c0f096-51ac-459a-b9f2-a7cb7f462734/manager/0.log" Jan 28 16:09:32 crc kubenswrapper[4871]: I0128 16:09:32.723853 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-mvh4p_41353a5b-bb79-45e7-8135-8229fa386ce4/manager/0.log" Jan 28 16:09:32 crc kubenswrapper[4871]: I0128 16:09:32.830523 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-vnt6l_91e83f0f-a088-4d2f-a32b-b7aaf38fd6f0/manager/0.log" Jan 28 16:09:32 crc kubenswrapper[4871]: I0128 16:09:32.953347 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-958664b5-98l68_b1b132cc-7a24-4a38-bf0b-6d26b36e551b/manager/0.log" Jan 28 16:09:33 crc kubenswrapper[4871]: I0128 16:09:33.134267 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-765668569f-4ncq7_96fd0c1b-c934-4481-b198-38ca2cb9d187/manager/0.log" Jan 28 16:09:33 crc kubenswrapper[4871]: I0128 16:09:33.144029 4871 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b84b46695-7jmcc_016e77e5-e2ea-4284-966f-16c5773febce/manager/0.log" Jan 28 16:09:33 crc kubenswrapper[4871]: I0128 16:09:33.303039 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-kh8pk_a7b94f34-87cf-4992-9480-4019281227c4/manager/0.log" Jan 28 16:09:33 crc kubenswrapper[4871]: I0128 16:09:33.425730 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-694c5bfc85-cvkg8_866e339c-74a0-47e8-aff9-3463890568a9/manager/0.log" Jan 28 16:09:33 crc kubenswrapper[4871]: I0128 16:09:33.583525 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-ddcbfd695-svf5f_806e885f-b6fc-4e7c-a81a-31bff54b7b06/manager/0.log" Jan 28 16:09:33 crc kubenswrapper[4871]: I0128 16:09:33.630515 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5c765b4558-dz587_31e7b45a-da4b-4920-895a-d51dba36168e/manager/0.log" Jan 28 16:09:33 crc kubenswrapper[4871]: I0128 16:09:33.826360 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4d9hn4g_a19393cd-d011-4387-9a34-07b67bd30d4e/manager/0.log" Jan 28 16:09:34 crc kubenswrapper[4871]: I0128 16:09:34.019253 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-678d9cfb88-shfdl_53e9d66f-621b-4d3e-a96a-1c330a53643b/operator/0.log" Jan 28 16:09:34 crc kubenswrapper[4871]: I0128 16:09:34.307277 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-28hwf_69c2316a-7c90-4a8d-905e-d1499d6dee39/registry-server/0.log" Jan 28 16:09:34 crc kubenswrapper[4871]: I0128 16:09:34.415797 4871 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-57d89bf95c-d4p8j_a1f1fd07-5c03-420c-bb27-e5ec2fece55b/manager/0.log" Jan 28 16:09:34 crc kubenswrapper[4871]: I0128 16:09:34.486984 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-4dvc8_ad851357-ed69-4ed0-80a4-1de2b1725d37/manager/0.log" Jan 28 16:09:34 crc kubenswrapper[4871]: I0128 16:09:34.576296 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-lrwg2_2fa4f067-8eed-44b7-995a-5160ee0576c6/manager/0.log" Jan 28 16:09:34 crc kubenswrapper[4871]: I0128 16:09:34.718300 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-bg5kq_3481a933-7882-4030-b852-9eb2f9f89b88/operator/0.log" Jan 28 16:09:34 crc kubenswrapper[4871]: I0128 16:09:34.839479 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-nrtm8_1441eab7-88f7-4278-b61e-15822bf73aca/manager/0.log" Jan 28 16:09:35 crc kubenswrapper[4871]: I0128 16:09:35.018557 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d69b9c5db-qvbqw_936d0985-4e97-40ed-b0c4-e0eb92d4372f/manager/0.log" Jan 28 16:09:35 crc kubenswrapper[4871]: I0128 16:09:35.043642 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-d62vq_8552dce5-130b-4598-8101-89ea1c19dc3a/manager/0.log" Jan 28 16:09:35 crc kubenswrapper[4871]: I0128 16:09:35.214790 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-767b8bc766-rqmz6_22008ca1-ed64-4a04-b45a-c9808ad68773/manager/0.log" Jan 28 16:09:43 crc kubenswrapper[4871]: I0128 16:09:43.813752 4871 
patch_prober.go:28] interesting pod/machine-config-daemon-7tkqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:09:43 crc kubenswrapper[4871]: I0128 16:09:43.814150 4871 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:09:43 crc kubenswrapper[4871]: I0128 16:09:43.814210 4871 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" Jan 28 16:09:43 crc kubenswrapper[4871]: I0128 16:09:43.815076 4871 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b"} pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:09:43 crc kubenswrapper[4871]: I0128 16:09:43.815137 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerName="machine-config-daemon" containerID="cri-o://b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" gracePeriod=600 Jan 28 16:09:43 crc kubenswrapper[4871]: E0128 16:09:43.949232 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:09:44 crc kubenswrapper[4871]: I0128 16:09:44.230040 4871 generic.go:334] "Generic (PLEG): container finished" podID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" exitCode=0 Jan 28 16:09:44 crc kubenswrapper[4871]: I0128 16:09:44.230101 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerDied","Data":"b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b"} Jan 28 16:09:44 crc kubenswrapper[4871]: I0128 16:09:44.230189 4871 scope.go:117] "RemoveContainer" containerID="43cb38f5cda5c90164b6b38b56d82a43200106b8ddf13410e4e80153656563fe" Jan 28 16:09:44 crc kubenswrapper[4871]: I0128 16:09:44.230903 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:09:44 crc kubenswrapper[4871]: E0128 16:09:44.231166 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:09:55 crc kubenswrapper[4871]: I0128 16:09:55.545270 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-lqzgh_28dc3b19-c5e4-4de6-889a-043b95a5f0f2/control-plane-machine-set-operator/0.log" Jan 28 16:09:55 crc kubenswrapper[4871]: I0128 
16:09:55.693599 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-p4lhv_dc917f74-9ee2-4f96-baa1-9bc802c0d448/kube-rbac-proxy/0.log" Jan 28 16:09:55 crc kubenswrapper[4871]: I0128 16:09:55.772903 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-p4lhv_dc917f74-9ee2-4f96-baa1-9bc802c0d448/machine-api-operator/0.log" Jan 28 16:09:56 crc kubenswrapper[4871]: I0128 16:09:56.904532 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:09:56 crc kubenswrapper[4871]: E0128 16:09:56.905246 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:10:09 crc kubenswrapper[4871]: I0128 16:10:09.239367 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-l9bx2_cd58f6b5-07a7-4bf5-adc6-db2b4d0e021b/cert-manager-controller/0.log" Jan 28 16:10:09 crc kubenswrapper[4871]: I0128 16:10:09.374020 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-bm7cr_ea8a32e9-2b85-4260-a753-b4379145d43f/cert-manager-cainjector/0.log" Jan 28 16:10:09 crc kubenswrapper[4871]: I0128 16:10:09.463358 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-7p7tg_963e85a0-b8b7-41d3-89cd-437e0cbec396/cert-manager-webhook/0.log" Jan 28 16:10:09 crc kubenswrapper[4871]: I0128 16:10:09.903932 4871 scope.go:117] "RemoveContainer" 
containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:10:09 crc kubenswrapper[4871]: E0128 16:10:09.904222 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:10:22 crc kubenswrapper[4871]: I0128 16:10:22.237026 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-zwd77_ea02be43-3af2-4dbc-81f9-f456805b9b8d/nmstate-console-plugin/0.log" Jan 28 16:10:22 crc kubenswrapper[4871]: I0128 16:10:22.424683 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-62t6z_0d70e50a-fc4d-468c-a990-b84318b6db7d/nmstate-handler/0.log" Jan 28 16:10:22 crc kubenswrapper[4871]: I0128 16:10:22.484601 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k4ztv_adecd4d5-4d6e-42e6-b6a1-25924a745bf4/kube-rbac-proxy/0.log" Jan 28 16:10:22 crc kubenswrapper[4871]: I0128 16:10:22.593746 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k4ztv_adecd4d5-4d6e-42e6-b6a1-25924a745bf4/nmstate-metrics/0.log" Jan 28 16:10:22 crc kubenswrapper[4871]: I0128 16:10:22.683522 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-wp8qk_5b5280aa-3b45-4da9-9d71-9c2448f5aa6a/nmstate-operator/0.log" Jan 28 16:10:22 crc kubenswrapper[4871]: I0128 16:10:22.771271 4871 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-6lx5n_3bd61983-6d96-440d-969c-6e70160a269e/nmstate-webhook/0.log" Jan 28 16:10:24 crc kubenswrapper[4871]: I0128 16:10:24.904305 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:10:24 crc kubenswrapper[4871]: E0128 16:10:24.904937 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:10:39 crc kubenswrapper[4871]: I0128 16:10:39.904498 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:10:39 crc kubenswrapper[4871]: E0128 16:10:39.905550 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:10:49 crc kubenswrapper[4871]: I0128 16:10:49.955369 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nhdql_712268de-0e81-4e98-af1c-fb669463f095/kube-rbac-proxy/0.log" Jan 28 16:10:50 crc kubenswrapper[4871]: I0128 16:10:50.111958 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nhdql_712268de-0e81-4e98-af1c-fb669463f095/controller/0.log" Jan 28 16:10:50 crc kubenswrapper[4871]: I0128 16:10:50.187665 4871 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sxgvx_ab15ac57-d0f0-4f23-95f1-00c1762553d1/cp-frr-files/0.log" Jan 28 16:10:50 crc kubenswrapper[4871]: I0128 16:10:50.364250 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sxgvx_ab15ac57-d0f0-4f23-95f1-00c1762553d1/cp-metrics/0.log" Jan 28 16:10:50 crc kubenswrapper[4871]: I0128 16:10:50.366288 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sxgvx_ab15ac57-d0f0-4f23-95f1-00c1762553d1/cp-reloader/0.log" Jan 28 16:10:50 crc kubenswrapper[4871]: I0128 16:10:50.409769 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sxgvx_ab15ac57-d0f0-4f23-95f1-00c1762553d1/cp-reloader/0.log" Jan 28 16:10:50 crc kubenswrapper[4871]: I0128 16:10:50.410910 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sxgvx_ab15ac57-d0f0-4f23-95f1-00c1762553d1/cp-frr-files/0.log" Jan 28 16:10:50 crc kubenswrapper[4871]: I0128 16:10:50.631751 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sxgvx_ab15ac57-d0f0-4f23-95f1-00c1762553d1/cp-reloader/0.log" Jan 28 16:10:50 crc kubenswrapper[4871]: I0128 16:10:50.645043 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sxgvx_ab15ac57-d0f0-4f23-95f1-00c1762553d1/cp-metrics/0.log" Jan 28 16:10:50 crc kubenswrapper[4871]: I0128 16:10:50.654096 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sxgvx_ab15ac57-d0f0-4f23-95f1-00c1762553d1/cp-metrics/0.log" Jan 28 16:10:50 crc kubenswrapper[4871]: I0128 16:10:50.687310 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sxgvx_ab15ac57-d0f0-4f23-95f1-00c1762553d1/cp-frr-files/0.log" Jan 28 16:10:50 crc kubenswrapper[4871]: I0128 16:10:50.854581 4871 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-sxgvx_ab15ac57-d0f0-4f23-95f1-00c1762553d1/cp-metrics/0.log" Jan 28 16:10:50 crc kubenswrapper[4871]: I0128 16:10:50.866437 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sxgvx_ab15ac57-d0f0-4f23-95f1-00c1762553d1/cp-reloader/0.log" Jan 28 16:10:50 crc kubenswrapper[4871]: I0128 16:10:50.879878 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sxgvx_ab15ac57-d0f0-4f23-95f1-00c1762553d1/cp-frr-files/0.log" Jan 28 16:10:50 crc kubenswrapper[4871]: I0128 16:10:50.880961 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sxgvx_ab15ac57-d0f0-4f23-95f1-00c1762553d1/controller/0.log" Jan 28 16:10:51 crc kubenswrapper[4871]: I0128 16:10:51.079558 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sxgvx_ab15ac57-d0f0-4f23-95f1-00c1762553d1/kube-rbac-proxy-frr/0.log" Jan 28 16:10:51 crc kubenswrapper[4871]: I0128 16:10:51.099724 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sxgvx_ab15ac57-d0f0-4f23-95f1-00c1762553d1/frr-metrics/0.log" Jan 28 16:10:51 crc kubenswrapper[4871]: I0128 16:10:51.108887 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sxgvx_ab15ac57-d0f0-4f23-95f1-00c1762553d1/kube-rbac-proxy/0.log" Jan 28 16:10:51 crc kubenswrapper[4871]: I0128 16:10:51.287312 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sxgvx_ab15ac57-d0f0-4f23-95f1-00c1762553d1/reloader/0.log" Jan 28 16:10:51 crc kubenswrapper[4871]: I0128 16:10:51.411982 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-zj59b_11b42cc6-d8ef-4a19-8486-a430ba2f958e/frr-k8s-webhook-server/0.log" Jan 28 16:10:51 crc kubenswrapper[4871]: I0128 16:10:51.517603 4871 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-sxgvx_ab15ac57-d0f0-4f23-95f1-00c1762553d1/frr/0.log" Jan 28 16:10:51 crc kubenswrapper[4871]: I0128 16:10:51.630183 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6fbbb975c4-sgkdk_58432f54-c624-4b9e-a13d-f63b16f88543/manager/0.log" Jan 28 16:10:51 crc kubenswrapper[4871]: I0128 16:10:51.798095 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-bc9b59d67-5ggd4_77871b10-57a8-4121-b3db-3389e942bc8b/webhook-server/0.log" Jan 28 16:10:51 crc kubenswrapper[4871]: I0128 16:10:51.875929 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4mfp9_430986a8-e927-489e-888a-6f119020bdda/kube-rbac-proxy/0.log" Jan 28 16:10:52 crc kubenswrapper[4871]: I0128 16:10:52.051225 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4mfp9_430986a8-e927-489e-888a-6f119020bdda/speaker/0.log" Jan 28 16:10:54 crc kubenswrapper[4871]: I0128 16:10:54.904577 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:10:54 crc kubenswrapper[4871]: E0128 16:10:54.905207 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:11:05 crc kubenswrapper[4871]: I0128 16:11:05.470941 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6_c9e4c264-467e-4342-92f0-ee028eb94264/util/0.log" Jan 28 16:11:05 crc kubenswrapper[4871]: I0128 
16:11:05.635478 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6_c9e4c264-467e-4342-92f0-ee028eb94264/util/0.log" Jan 28 16:11:05 crc kubenswrapper[4871]: I0128 16:11:05.693296 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6_c9e4c264-467e-4342-92f0-ee028eb94264/pull/0.log" Jan 28 16:11:05 crc kubenswrapper[4871]: I0128 16:11:05.729648 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6_c9e4c264-467e-4342-92f0-ee028eb94264/pull/0.log" Jan 28 16:11:05 crc kubenswrapper[4871]: I0128 16:11:05.944261 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6_c9e4c264-467e-4342-92f0-ee028eb94264/util/0.log" Jan 28 16:11:05 crc kubenswrapper[4871]: I0128 16:11:05.946689 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6_c9e4c264-467e-4342-92f0-ee028eb94264/extract/0.log" Jan 28 16:11:06 crc kubenswrapper[4871]: I0128 16:11:06.021760 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8pmp6_c9e4c264-467e-4342-92f0-ee028eb94264/pull/0.log" Jan 28 16:11:06 crc kubenswrapper[4871]: I0128 16:11:06.134934 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l_a14e6292-a57f-4cda-98ca-3fa791f61964/util/0.log" Jan 28 16:11:06 crc kubenswrapper[4871]: I0128 16:11:06.337252 4871 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l_a14e6292-a57f-4cda-98ca-3fa791f61964/util/0.log" Jan 28 16:11:06 crc kubenswrapper[4871]: I0128 16:11:06.339050 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l_a14e6292-a57f-4cda-98ca-3fa791f61964/pull/0.log" Jan 28 16:11:06 crc kubenswrapper[4871]: I0128 16:11:06.339328 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l_a14e6292-a57f-4cda-98ca-3fa791f61964/pull/0.log" Jan 28 16:11:06 crc kubenswrapper[4871]: I0128 16:11:06.546486 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l_a14e6292-a57f-4cda-98ca-3fa791f61964/pull/0.log" Jan 28 16:11:06 crc kubenswrapper[4871]: I0128 16:11:06.570836 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l_a14e6292-a57f-4cda-98ca-3fa791f61964/util/0.log" Jan 28 16:11:06 crc kubenswrapper[4871]: I0128 16:11:06.581080 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfx28l_a14e6292-a57f-4cda-98ca-3fa791f61964/extract/0.log" Jan 28 16:11:06 crc kubenswrapper[4871]: I0128 16:11:06.749292 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b_9516ba5d-c370-480d-8ab7-5e90e188fc9b/util/0.log" Jan 28 16:11:07 crc kubenswrapper[4871]: I0128 16:11:07.009369 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b_9516ba5d-c370-480d-8ab7-5e90e188fc9b/util/0.log" Jan 28 
16:11:07 crc kubenswrapper[4871]: I0128 16:11:07.012083 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b_9516ba5d-c370-480d-8ab7-5e90e188fc9b/pull/0.log" Jan 28 16:11:07 crc kubenswrapper[4871]: I0128 16:11:07.027132 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b_9516ba5d-c370-480d-8ab7-5e90e188fc9b/pull/0.log" Jan 28 16:11:07 crc kubenswrapper[4871]: I0128 16:11:07.185558 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b_9516ba5d-c370-480d-8ab7-5e90e188fc9b/util/0.log" Jan 28 16:11:07 crc kubenswrapper[4871]: I0128 16:11:07.187655 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b_9516ba5d-c370-480d-8ab7-5e90e188fc9b/pull/0.log" Jan 28 16:11:07 crc kubenswrapper[4871]: I0128 16:11:07.225752 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71362k9b_9516ba5d-c370-480d-8ab7-5e90e188fc9b/extract/0.log" Jan 28 16:11:07 crc kubenswrapper[4871]: I0128 16:11:07.364942 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-twk99_7db86131-17b1-4d13-9a6b-469419099f0e/extract-utilities/0.log" Jan 28 16:11:07 crc kubenswrapper[4871]: I0128 16:11:07.545455 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-twk99_7db86131-17b1-4d13-9a6b-469419099f0e/extract-content/0.log" Jan 28 16:11:07 crc kubenswrapper[4871]: I0128 16:11:07.567735 4871 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-twk99_7db86131-17b1-4d13-9a6b-469419099f0e/extract-utilities/0.log" Jan 28 16:11:07 crc kubenswrapper[4871]: I0128 16:11:07.573362 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-twk99_7db86131-17b1-4d13-9a6b-469419099f0e/extract-content/0.log" Jan 28 16:11:07 crc kubenswrapper[4871]: I0128 16:11:07.738821 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-twk99_7db86131-17b1-4d13-9a6b-469419099f0e/extract-utilities/0.log" Jan 28 16:11:07 crc kubenswrapper[4871]: I0128 16:11:07.780983 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-twk99_7db86131-17b1-4d13-9a6b-469419099f0e/extract-content/0.log" Jan 28 16:11:07 crc kubenswrapper[4871]: I0128 16:11:07.903386 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:11:07 crc kubenswrapper[4871]: E0128 16:11:07.903658 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:11:08 crc kubenswrapper[4871]: I0128 16:11:08.000454 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lhxxb_6ff5567c-418f-4f43-9839-373c20d07017/extract-utilities/0.log" Jan 28 16:11:08 crc kubenswrapper[4871]: I0128 16:11:08.250223 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lhxxb_6ff5567c-418f-4f43-9839-373c20d07017/extract-utilities/0.log" Jan 28 16:11:08 
crc kubenswrapper[4871]: I0128 16:11:08.282698 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lhxxb_6ff5567c-418f-4f43-9839-373c20d07017/extract-content/0.log" Jan 28 16:11:08 crc kubenswrapper[4871]: I0128 16:11:08.293079 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-twk99_7db86131-17b1-4d13-9a6b-469419099f0e/registry-server/0.log" Jan 28 16:11:08 crc kubenswrapper[4871]: I0128 16:11:08.299260 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lhxxb_6ff5567c-418f-4f43-9839-373c20d07017/extract-content/0.log" Jan 28 16:11:08 crc kubenswrapper[4871]: I0128 16:11:08.493970 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lhxxb_6ff5567c-418f-4f43-9839-373c20d07017/extract-utilities/0.log" Jan 28 16:11:08 crc kubenswrapper[4871]: I0128 16:11:08.507721 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lhxxb_6ff5567c-418f-4f43-9839-373c20d07017/extract-content/0.log" Jan 28 16:11:08 crc kubenswrapper[4871]: I0128 16:11:08.808213 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4zcxk_e0205aa8-3e10-465a-8f2a-26a165b8b32e/extract-utilities/0.log" Jan 28 16:11:08 crc kubenswrapper[4871]: I0128 16:11:08.906765 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-x5bm6_9487509e-3495-440b-9698-6669be6a0d5a/marketplace-operator/0.log" Jan 28 16:11:09 crc kubenswrapper[4871]: I0128 16:11:09.100492 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4zcxk_e0205aa8-3e10-465a-8f2a-26a165b8b32e/extract-utilities/0.log" Jan 28 16:11:09 crc kubenswrapper[4871]: I0128 16:11:09.124490 4871 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-lhxxb_6ff5567c-418f-4f43-9839-373c20d07017/registry-server/0.log" Jan 28 16:11:09 crc kubenswrapper[4871]: I0128 16:11:09.179365 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4zcxk_e0205aa8-3e10-465a-8f2a-26a165b8b32e/extract-content/0.log" Jan 28 16:11:09 crc kubenswrapper[4871]: I0128 16:11:09.197177 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4zcxk_e0205aa8-3e10-465a-8f2a-26a165b8b32e/extract-content/0.log" Jan 28 16:11:09 crc kubenswrapper[4871]: I0128 16:11:09.331446 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4zcxk_e0205aa8-3e10-465a-8f2a-26a165b8b32e/extract-utilities/0.log" Jan 28 16:11:09 crc kubenswrapper[4871]: I0128 16:11:09.347921 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4zcxk_e0205aa8-3e10-465a-8f2a-26a165b8b32e/extract-content/0.log" Jan 28 16:11:09 crc kubenswrapper[4871]: I0128 16:11:09.524218 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4zcxk_e0205aa8-3e10-465a-8f2a-26a165b8b32e/registry-server/0.log" Jan 28 16:11:09 crc kubenswrapper[4871]: I0128 16:11:09.570779 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4hqk6_3bc56629-24ba-4be9-8ec1-1ac434eed1e9/extract-utilities/0.log" Jan 28 16:11:09 crc kubenswrapper[4871]: I0128 16:11:09.766602 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4hqk6_3bc56629-24ba-4be9-8ec1-1ac434eed1e9/extract-utilities/0.log" Jan 28 16:11:09 crc kubenswrapper[4871]: I0128 16:11:09.774483 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4hqk6_3bc56629-24ba-4be9-8ec1-1ac434eed1e9/extract-content/0.log" 
Jan 28 16:11:09 crc kubenswrapper[4871]: I0128 16:11:09.791382 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4hqk6_3bc56629-24ba-4be9-8ec1-1ac434eed1e9/extract-content/0.log" Jan 28 16:11:09 crc kubenswrapper[4871]: I0128 16:11:09.965572 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4hqk6_3bc56629-24ba-4be9-8ec1-1ac434eed1e9/extract-utilities/0.log" Jan 28 16:11:09 crc kubenswrapper[4871]: I0128 16:11:09.977280 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4hqk6_3bc56629-24ba-4be9-8ec1-1ac434eed1e9/extract-content/0.log" Jan 28 16:11:10 crc kubenswrapper[4871]: I0128 16:11:10.228358 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4hqk6_3bc56629-24ba-4be9-8ec1-1ac434eed1e9/registry-server/0.log" Jan 28 16:11:19 crc kubenswrapper[4871]: I0128 16:11:19.904237 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:11:19 crc kubenswrapper[4871]: E0128 16:11:19.905299 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:11:34 crc kubenswrapper[4871]: I0128 16:11:34.904898 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:11:34 crc kubenswrapper[4871]: E0128 16:11:34.907343 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:11:48 crc kubenswrapper[4871]: I0128 16:11:48.911617 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:11:48 crc kubenswrapper[4871]: E0128 16:11:48.912710 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:11:59 crc kubenswrapper[4871]: I0128 16:11:59.904578 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:11:59 crc kubenswrapper[4871]: E0128 16:11:59.905890 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:12:11 crc kubenswrapper[4871]: I0128 16:12:11.904402 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:12:11 crc kubenswrapper[4871]: E0128 16:12:11.905156 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:12:23 crc kubenswrapper[4871]: I0128 16:12:23.916938 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:12:23 crc kubenswrapper[4871]: E0128 16:12:23.917681 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:12:36 crc kubenswrapper[4871]: I0128 16:12:36.903530 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:12:36 crc kubenswrapper[4871]: E0128 16:12:36.904263 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:12:44 crc kubenswrapper[4871]: I0128 16:12:44.810859 4871 generic.go:334] "Generic (PLEG): container finished" podID="f662252f-c0a4-4ed5-8c93-67468b6b026c" containerID="d3b6721e5effce2e7356a1e5142331aa05506ec7edd9319f486a3752fae1f127" exitCode=0 Jan 28 16:12:44 crc kubenswrapper[4871]: I0128 16:12:44.810964 4871 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-must-gather-jfvkh/must-gather-cbftf" event={"ID":"f662252f-c0a4-4ed5-8c93-67468b6b026c","Type":"ContainerDied","Data":"d3b6721e5effce2e7356a1e5142331aa05506ec7edd9319f486a3752fae1f127"} Jan 28 16:12:44 crc kubenswrapper[4871]: I0128 16:12:44.811996 4871 scope.go:117] "RemoveContainer" containerID="d3b6721e5effce2e7356a1e5142331aa05506ec7edd9319f486a3752fae1f127" Jan 28 16:12:44 crc kubenswrapper[4871]: I0128 16:12:44.873139 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jfvkh_must-gather-cbftf_f662252f-c0a4-4ed5-8c93-67468b6b026c/gather/0.log" Jan 28 16:12:50 crc kubenswrapper[4871]: I0128 16:12:50.904941 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:12:50 crc kubenswrapper[4871]: E0128 16:12:50.905500 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:12:52 crc kubenswrapper[4871]: I0128 16:12:52.877966 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jfvkh/must-gather-cbftf"] Jan 28 16:12:52 crc kubenswrapper[4871]: I0128 16:12:52.878534 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-jfvkh/must-gather-cbftf" podUID="f662252f-c0a4-4ed5-8c93-67468b6b026c" containerName="copy" containerID="cri-o://81089ab5611e439396d66f6b46e9f8e9d7474e31cb68f18a24a5fec6094a2035" gracePeriod=2 Jan 28 16:12:52 crc kubenswrapper[4871]: I0128 16:12:52.891243 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jfvkh/must-gather-cbftf"] 
Jan 28 16:12:53 crc kubenswrapper[4871]: I0128 16:12:53.884282 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jfvkh_must-gather-cbftf_f662252f-c0a4-4ed5-8c93-67468b6b026c/copy/0.log" Jan 28 16:12:53 crc kubenswrapper[4871]: I0128 16:12:53.885078 4871 generic.go:334] "Generic (PLEG): container finished" podID="f662252f-c0a4-4ed5-8c93-67468b6b026c" containerID="81089ab5611e439396d66f6b46e9f8e9d7474e31cb68f18a24a5fec6094a2035" exitCode=143 Jan 28 16:12:54 crc kubenswrapper[4871]: I0128 16:12:54.030902 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jfvkh_must-gather-cbftf_f662252f-c0a4-4ed5-8c93-67468b6b026c/copy/0.log" Jan 28 16:12:54 crc kubenswrapper[4871]: I0128 16:12:54.032192 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jfvkh/must-gather-cbftf" Jan 28 16:12:54 crc kubenswrapper[4871]: I0128 16:12:54.139372 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f662252f-c0a4-4ed5-8c93-67468b6b026c-must-gather-output\") pod \"f662252f-c0a4-4ed5-8c93-67468b6b026c\" (UID: \"f662252f-c0a4-4ed5-8c93-67468b6b026c\") " Jan 28 16:12:54 crc kubenswrapper[4871]: I0128 16:12:54.139747 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgnjw\" (UniqueName: \"kubernetes.io/projected/f662252f-c0a4-4ed5-8c93-67468b6b026c-kube-api-access-qgnjw\") pod \"f662252f-c0a4-4ed5-8c93-67468b6b026c\" (UID: \"f662252f-c0a4-4ed5-8c93-67468b6b026c\") " Jan 28 16:12:54 crc kubenswrapper[4871]: I0128 16:12:54.146694 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f662252f-c0a4-4ed5-8c93-67468b6b026c-kube-api-access-qgnjw" (OuterVolumeSpecName: "kube-api-access-qgnjw") pod "f662252f-c0a4-4ed5-8c93-67468b6b026c" (UID: "f662252f-c0a4-4ed5-8c93-67468b6b026c"). 
InnerVolumeSpecName "kube-api-access-qgnjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:12:54 crc kubenswrapper[4871]: I0128 16:12:54.242445 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgnjw\" (UniqueName: \"kubernetes.io/projected/f662252f-c0a4-4ed5-8c93-67468b6b026c-kube-api-access-qgnjw\") on node \"crc\" DevicePath \"\"" Jan 28 16:12:54 crc kubenswrapper[4871]: I0128 16:12:54.246390 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f662252f-c0a4-4ed5-8c93-67468b6b026c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f662252f-c0a4-4ed5-8c93-67468b6b026c" (UID: "f662252f-c0a4-4ed5-8c93-67468b6b026c"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:12:54 crc kubenswrapper[4871]: I0128 16:12:54.344942 4871 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f662252f-c0a4-4ed5-8c93-67468b6b026c-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 28 16:12:54 crc kubenswrapper[4871]: I0128 16:12:54.892474 4871 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jfvkh_must-gather-cbftf_f662252f-c0a4-4ed5-8c93-67468b6b026c/copy/0.log" Jan 28 16:12:54 crc kubenswrapper[4871]: I0128 16:12:54.893194 4871 scope.go:117] "RemoveContainer" containerID="81089ab5611e439396d66f6b46e9f8e9d7474e31cb68f18a24a5fec6094a2035" Jan 28 16:12:54 crc kubenswrapper[4871]: I0128 16:12:54.893305 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jfvkh/must-gather-cbftf" Jan 28 16:12:54 crc kubenswrapper[4871]: I0128 16:12:54.914252 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f662252f-c0a4-4ed5-8c93-67468b6b026c" path="/var/lib/kubelet/pods/f662252f-c0a4-4ed5-8c93-67468b6b026c/volumes" Jan 28 16:12:54 crc kubenswrapper[4871]: I0128 16:12:54.915784 4871 scope.go:117] "RemoveContainer" containerID="d3b6721e5effce2e7356a1e5142331aa05506ec7edd9319f486a3752fae1f127" Jan 28 16:12:55 crc kubenswrapper[4871]: I0128 16:12:55.650915 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j7fwk"] Jan 28 16:12:55 crc kubenswrapper[4871]: E0128 16:12:55.651258 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f662252f-c0a4-4ed5-8c93-67468b6b026c" containerName="gather" Jan 28 16:12:55 crc kubenswrapper[4871]: I0128 16:12:55.651276 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="f662252f-c0a4-4ed5-8c93-67468b6b026c" containerName="gather" Jan 28 16:12:55 crc kubenswrapper[4871]: E0128 16:12:55.651305 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f662252f-c0a4-4ed5-8c93-67468b6b026c" containerName="copy" Jan 28 16:12:55 crc kubenswrapper[4871]: I0128 16:12:55.651313 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="f662252f-c0a4-4ed5-8c93-67468b6b026c" containerName="copy" Jan 28 16:12:55 crc kubenswrapper[4871]: E0128 16:12:55.651324 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22b0cdbe-3a6d-4049-bd71-942488121294" containerName="container-00" Jan 28 16:12:55 crc kubenswrapper[4871]: I0128 16:12:55.651330 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="22b0cdbe-3a6d-4049-bd71-942488121294" containerName="container-00" Jan 28 16:12:55 crc kubenswrapper[4871]: I0128 16:12:55.651481 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="f662252f-c0a4-4ed5-8c93-67468b6b026c" 
containerName="copy" Jan 28 16:12:55 crc kubenswrapper[4871]: I0128 16:12:55.651500 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="f662252f-c0a4-4ed5-8c93-67468b6b026c" containerName="gather" Jan 28 16:12:55 crc kubenswrapper[4871]: I0128 16:12:55.651512 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="22b0cdbe-3a6d-4049-bd71-942488121294" containerName="container-00" Jan 28 16:12:55 crc kubenswrapper[4871]: I0128 16:12:55.652732 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j7fwk" Jan 28 16:12:55 crc kubenswrapper[4871]: I0128 16:12:55.660799 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j7fwk"] Jan 28 16:12:55 crc kubenswrapper[4871]: I0128 16:12:55.766518 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef0974b8-da83-44d4-9717-1ef400d5cf44-catalog-content\") pod \"redhat-operators-j7fwk\" (UID: \"ef0974b8-da83-44d4-9717-1ef400d5cf44\") " pod="openshift-marketplace/redhat-operators-j7fwk" Jan 28 16:12:55 crc kubenswrapper[4871]: I0128 16:12:55.766609 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmpsm\" (UniqueName: \"kubernetes.io/projected/ef0974b8-da83-44d4-9717-1ef400d5cf44-kube-api-access-bmpsm\") pod \"redhat-operators-j7fwk\" (UID: \"ef0974b8-da83-44d4-9717-1ef400d5cf44\") " pod="openshift-marketplace/redhat-operators-j7fwk" Jan 28 16:12:55 crc kubenswrapper[4871]: I0128 16:12:55.766867 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef0974b8-da83-44d4-9717-1ef400d5cf44-utilities\") pod \"redhat-operators-j7fwk\" (UID: \"ef0974b8-da83-44d4-9717-1ef400d5cf44\") " 
pod="openshift-marketplace/redhat-operators-j7fwk" Jan 28 16:12:55 crc kubenswrapper[4871]: I0128 16:12:55.868785 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef0974b8-da83-44d4-9717-1ef400d5cf44-utilities\") pod \"redhat-operators-j7fwk\" (UID: \"ef0974b8-da83-44d4-9717-1ef400d5cf44\") " pod="openshift-marketplace/redhat-operators-j7fwk" Jan 28 16:12:55 crc kubenswrapper[4871]: I0128 16:12:55.869121 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef0974b8-da83-44d4-9717-1ef400d5cf44-catalog-content\") pod \"redhat-operators-j7fwk\" (UID: \"ef0974b8-da83-44d4-9717-1ef400d5cf44\") " pod="openshift-marketplace/redhat-operators-j7fwk" Jan 28 16:12:55 crc kubenswrapper[4871]: I0128 16:12:55.869291 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmpsm\" (UniqueName: \"kubernetes.io/projected/ef0974b8-da83-44d4-9717-1ef400d5cf44-kube-api-access-bmpsm\") pod \"redhat-operators-j7fwk\" (UID: \"ef0974b8-da83-44d4-9717-1ef400d5cf44\") " pod="openshift-marketplace/redhat-operators-j7fwk" Jan 28 16:12:55 crc kubenswrapper[4871]: I0128 16:12:55.869697 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef0974b8-da83-44d4-9717-1ef400d5cf44-utilities\") pod \"redhat-operators-j7fwk\" (UID: \"ef0974b8-da83-44d4-9717-1ef400d5cf44\") " pod="openshift-marketplace/redhat-operators-j7fwk" Jan 28 16:12:55 crc kubenswrapper[4871]: I0128 16:12:55.870072 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef0974b8-da83-44d4-9717-1ef400d5cf44-catalog-content\") pod \"redhat-operators-j7fwk\" (UID: \"ef0974b8-da83-44d4-9717-1ef400d5cf44\") " pod="openshift-marketplace/redhat-operators-j7fwk" Jan 28 16:12:55 crc 
kubenswrapper[4871]: I0128 16:12:55.913512 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmpsm\" (UniqueName: \"kubernetes.io/projected/ef0974b8-da83-44d4-9717-1ef400d5cf44-kube-api-access-bmpsm\") pod \"redhat-operators-j7fwk\" (UID: \"ef0974b8-da83-44d4-9717-1ef400d5cf44\") " pod="openshift-marketplace/redhat-operators-j7fwk" Jan 28 16:12:55 crc kubenswrapper[4871]: I0128 16:12:55.981305 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j7fwk" Jan 28 16:12:56 crc kubenswrapper[4871]: I0128 16:12:56.546009 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j7fwk"] Jan 28 16:12:56 crc kubenswrapper[4871]: I0128 16:12:56.919221 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j7fwk" event={"ID":"ef0974b8-da83-44d4-9717-1ef400d5cf44","Type":"ContainerStarted","Data":"49f1e9d5e03cea1bc7b38e756b915fa8212f15671b9003e6ac951922e9376dc5"} Jan 28 16:12:57 crc kubenswrapper[4871]: I0128 16:12:57.056136 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rqp6r"] Jan 28 16:12:57 crc kubenswrapper[4871]: I0128 16:12:57.057706 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rqp6r" Jan 28 16:12:57 crc kubenswrapper[4871]: I0128 16:12:57.091278 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84187080-59c1-404f-8929-26aa7cc87e29-catalog-content\") pod \"redhat-marketplace-rqp6r\" (UID: \"84187080-59c1-404f-8929-26aa7cc87e29\") " pod="openshift-marketplace/redhat-marketplace-rqp6r" Jan 28 16:12:57 crc kubenswrapper[4871]: I0128 16:12:57.091370 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgnkj\" (UniqueName: \"kubernetes.io/projected/84187080-59c1-404f-8929-26aa7cc87e29-kube-api-access-hgnkj\") pod \"redhat-marketplace-rqp6r\" (UID: \"84187080-59c1-404f-8929-26aa7cc87e29\") " pod="openshift-marketplace/redhat-marketplace-rqp6r" Jan 28 16:12:57 crc kubenswrapper[4871]: I0128 16:12:57.091395 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84187080-59c1-404f-8929-26aa7cc87e29-utilities\") pod \"redhat-marketplace-rqp6r\" (UID: \"84187080-59c1-404f-8929-26aa7cc87e29\") " pod="openshift-marketplace/redhat-marketplace-rqp6r" Jan 28 16:12:57 crc kubenswrapper[4871]: I0128 16:12:57.096817 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqp6r"] Jan 28 16:12:57 crc kubenswrapper[4871]: I0128 16:12:57.193542 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84187080-59c1-404f-8929-26aa7cc87e29-catalog-content\") pod \"redhat-marketplace-rqp6r\" (UID: \"84187080-59c1-404f-8929-26aa7cc87e29\") " pod="openshift-marketplace/redhat-marketplace-rqp6r" Jan 28 16:12:57 crc kubenswrapper[4871]: I0128 16:12:57.193631 4871 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hgnkj\" (UniqueName: \"kubernetes.io/projected/84187080-59c1-404f-8929-26aa7cc87e29-kube-api-access-hgnkj\") pod \"redhat-marketplace-rqp6r\" (UID: \"84187080-59c1-404f-8929-26aa7cc87e29\") " pod="openshift-marketplace/redhat-marketplace-rqp6r" Jan 28 16:12:57 crc kubenswrapper[4871]: I0128 16:12:57.193654 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84187080-59c1-404f-8929-26aa7cc87e29-utilities\") pod \"redhat-marketplace-rqp6r\" (UID: \"84187080-59c1-404f-8929-26aa7cc87e29\") " pod="openshift-marketplace/redhat-marketplace-rqp6r" Jan 28 16:12:57 crc kubenswrapper[4871]: I0128 16:12:57.194123 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84187080-59c1-404f-8929-26aa7cc87e29-utilities\") pod \"redhat-marketplace-rqp6r\" (UID: \"84187080-59c1-404f-8929-26aa7cc87e29\") " pod="openshift-marketplace/redhat-marketplace-rqp6r" Jan 28 16:12:57 crc kubenswrapper[4871]: I0128 16:12:57.194381 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84187080-59c1-404f-8929-26aa7cc87e29-catalog-content\") pod \"redhat-marketplace-rqp6r\" (UID: \"84187080-59c1-404f-8929-26aa7cc87e29\") " pod="openshift-marketplace/redhat-marketplace-rqp6r" Jan 28 16:12:57 crc kubenswrapper[4871]: I0128 16:12:57.233643 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgnkj\" (UniqueName: \"kubernetes.io/projected/84187080-59c1-404f-8929-26aa7cc87e29-kube-api-access-hgnkj\") pod \"redhat-marketplace-rqp6r\" (UID: \"84187080-59c1-404f-8929-26aa7cc87e29\") " pod="openshift-marketplace/redhat-marketplace-rqp6r" Jan 28 16:12:57 crc kubenswrapper[4871]: I0128 16:12:57.374248 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rqp6r" Jan 28 16:12:57 crc kubenswrapper[4871]: I0128 16:12:57.838947 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqp6r"] Jan 28 16:12:57 crc kubenswrapper[4871]: W0128 16:12:57.839608 4871 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84187080_59c1_404f_8929_26aa7cc87e29.slice/crio-747864fd641f1b203ad9cd1e877f1cf508e4daf857eff5f835c270a4340140d7 WatchSource:0}: Error finding container 747864fd641f1b203ad9cd1e877f1cf508e4daf857eff5f835c270a4340140d7: Status 404 returned error can't find the container with id 747864fd641f1b203ad9cd1e877f1cf508e4daf857eff5f835c270a4340140d7 Jan 28 16:12:57 crc kubenswrapper[4871]: I0128 16:12:57.926348 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqp6r" event={"ID":"84187080-59c1-404f-8929-26aa7cc87e29","Type":"ContainerStarted","Data":"747864fd641f1b203ad9cd1e877f1cf508e4daf857eff5f835c270a4340140d7"} Jan 28 16:12:57 crc kubenswrapper[4871]: I0128 16:12:57.927670 4871 generic.go:334] "Generic (PLEG): container finished" podID="ef0974b8-da83-44d4-9717-1ef400d5cf44" containerID="820c0b5283324e20dfe34537928381507d8aef1de34a2ccb8624f65ad20401ad" exitCode=0 Jan 28 16:12:57 crc kubenswrapper[4871]: I0128 16:12:57.927698 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j7fwk" event={"ID":"ef0974b8-da83-44d4-9717-1ef400d5cf44","Type":"ContainerDied","Data":"820c0b5283324e20dfe34537928381507d8aef1de34a2ccb8624f65ad20401ad"} Jan 28 16:12:57 crc kubenswrapper[4871]: I0128 16:12:57.929698 4871 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 16:12:58 crc kubenswrapper[4871]: I0128 16:12:58.938456 4871 generic.go:334] "Generic (PLEG): container finished" 
podID="84187080-59c1-404f-8929-26aa7cc87e29" containerID="644953c82e1e09124bf406f061691c116eb2c23aec3c8fec4249bf0198d10292" exitCode=0 Jan 28 16:12:58 crc kubenswrapper[4871]: I0128 16:12:58.938736 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqp6r" event={"ID":"84187080-59c1-404f-8929-26aa7cc87e29","Type":"ContainerDied","Data":"644953c82e1e09124bf406f061691c116eb2c23aec3c8fec4249bf0198d10292"} Jan 28 16:13:05 crc kubenswrapper[4871]: I0128 16:13:05.904034 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:13:05 crc kubenswrapper[4871]: E0128 16:13:05.904777 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:13:08 crc kubenswrapper[4871]: I0128 16:13:08.052946 4871 generic.go:334] "Generic (PLEG): container finished" podID="84187080-59c1-404f-8929-26aa7cc87e29" containerID="e0b1900a1391db9326fa98dd53357af62ba56b3116aad3ea779395533a8d695d" exitCode=0 Jan 28 16:13:08 crc kubenswrapper[4871]: I0128 16:13:08.053031 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqp6r" event={"ID":"84187080-59c1-404f-8929-26aa7cc87e29","Type":"ContainerDied","Data":"e0b1900a1391db9326fa98dd53357af62ba56b3116aad3ea779395533a8d695d"} Jan 28 16:13:08 crc kubenswrapper[4871]: I0128 16:13:08.055755 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j7fwk" 
event={"ID":"ef0974b8-da83-44d4-9717-1ef400d5cf44","Type":"ContainerStarted","Data":"36802b21af5e814a02e4b73097103dd95c2db1960d7d04553ece3365bc0e8029"} Jan 28 16:13:09 crc kubenswrapper[4871]: I0128 16:13:09.065303 4871 generic.go:334] "Generic (PLEG): container finished" podID="ef0974b8-da83-44d4-9717-1ef400d5cf44" containerID="36802b21af5e814a02e4b73097103dd95c2db1960d7d04553ece3365bc0e8029" exitCode=0 Jan 28 16:13:09 crc kubenswrapper[4871]: I0128 16:13:09.065372 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j7fwk" event={"ID":"ef0974b8-da83-44d4-9717-1ef400d5cf44","Type":"ContainerDied","Data":"36802b21af5e814a02e4b73097103dd95c2db1960d7d04553ece3365bc0e8029"} Jan 28 16:13:11 crc kubenswrapper[4871]: I0128 16:13:11.090137 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqp6r" event={"ID":"84187080-59c1-404f-8929-26aa7cc87e29","Type":"ContainerStarted","Data":"39520b495d36581cccd24f9b610631dceec9b1cdafbcf129e698fa0e1277d6cd"} Jan 28 16:13:11 crc kubenswrapper[4871]: I0128 16:13:11.116352 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rqp6r" podStartSLOduration=2.837016573 podStartE2EDuration="14.116330879s" podCreationTimestamp="2026-01-28 16:12:57 +0000 UTC" firstStartedPulling="2026-01-28 16:12:58.940178052 +0000 UTC m=+3330.836016364" lastFinishedPulling="2026-01-28 16:13:10.219492348 +0000 UTC m=+3342.115330670" observedRunningTime="2026-01-28 16:13:11.113549332 +0000 UTC m=+3343.009387664" watchObservedRunningTime="2026-01-28 16:13:11.116330879 +0000 UTC m=+3343.012169201" Jan 28 16:13:12 crc kubenswrapper[4871]: I0128 16:13:12.101844 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j7fwk" 
event={"ID":"ef0974b8-da83-44d4-9717-1ef400d5cf44","Type":"ContainerStarted","Data":"549fbcab358221d0621565f807c97612f7aa035dbc4667e0d792a7f20611e3e8"} Jan 28 16:13:12 crc kubenswrapper[4871]: I0128 16:13:12.138653 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j7fwk" podStartSLOduration=3.735767675 podStartE2EDuration="17.138628472s" podCreationTimestamp="2026-01-28 16:12:55 +0000 UTC" firstStartedPulling="2026-01-28 16:12:57.929348288 +0000 UTC m=+3329.825186610" lastFinishedPulling="2026-01-28 16:13:11.332209085 +0000 UTC m=+3343.228047407" observedRunningTime="2026-01-28 16:13:12.130456375 +0000 UTC m=+3344.026294717" watchObservedRunningTime="2026-01-28 16:13:12.138628472 +0000 UTC m=+3344.034466834" Jan 28 16:13:15 crc kubenswrapper[4871]: I0128 16:13:15.983075 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j7fwk" Jan 28 16:13:15 crc kubenswrapper[4871]: I0128 16:13:15.983698 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j7fwk" Jan 28 16:13:17 crc kubenswrapper[4871]: I0128 16:13:17.029938 4871 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j7fwk" podUID="ef0974b8-da83-44d4-9717-1ef400d5cf44" containerName="registry-server" probeResult="failure" output=< Jan 28 16:13:17 crc kubenswrapper[4871]: timeout: failed to connect service ":50051" within 1s Jan 28 16:13:17 crc kubenswrapper[4871]: > Jan 28 16:13:17 crc kubenswrapper[4871]: I0128 16:13:17.375259 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rqp6r" Jan 28 16:13:17 crc kubenswrapper[4871]: I0128 16:13:17.375317 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rqp6r" Jan 28 16:13:17 crc kubenswrapper[4871]: I0128 
16:13:17.420270 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rqp6r" Jan 28 16:13:18 crc kubenswrapper[4871]: I0128 16:13:18.208099 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rqp6r" Jan 28 16:13:18 crc kubenswrapper[4871]: I0128 16:13:18.254313 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqp6r"] Jan 28 16:13:20 crc kubenswrapper[4871]: I0128 16:13:20.158366 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rqp6r" podUID="84187080-59c1-404f-8929-26aa7cc87e29" containerName="registry-server" containerID="cri-o://39520b495d36581cccd24f9b610631dceec9b1cdafbcf129e698fa0e1277d6cd" gracePeriod=2 Jan 28 16:13:20 crc kubenswrapper[4871]: I0128 16:13:20.903497 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:13:20 crc kubenswrapper[4871]: E0128 16:13:20.904069 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:13:22 crc kubenswrapper[4871]: I0128 16:13:22.189670 4871 generic.go:334] "Generic (PLEG): container finished" podID="84187080-59c1-404f-8929-26aa7cc87e29" containerID="39520b495d36581cccd24f9b610631dceec9b1cdafbcf129e698fa0e1277d6cd" exitCode=0 Jan 28 16:13:22 crc kubenswrapper[4871]: I0128 16:13:22.189831 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqp6r" 
event={"ID":"84187080-59c1-404f-8929-26aa7cc87e29","Type":"ContainerDied","Data":"39520b495d36581cccd24f9b610631dceec9b1cdafbcf129e698fa0e1277d6cd"} Jan 28 16:13:22 crc kubenswrapper[4871]: I0128 16:13:22.332180 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rqp6r" Jan 28 16:13:22 crc kubenswrapper[4871]: I0128 16:13:22.450614 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84187080-59c1-404f-8929-26aa7cc87e29-catalog-content\") pod \"84187080-59c1-404f-8929-26aa7cc87e29\" (UID: \"84187080-59c1-404f-8929-26aa7cc87e29\") " Jan 28 16:13:22 crc kubenswrapper[4871]: I0128 16:13:22.450802 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgnkj\" (UniqueName: \"kubernetes.io/projected/84187080-59c1-404f-8929-26aa7cc87e29-kube-api-access-hgnkj\") pod \"84187080-59c1-404f-8929-26aa7cc87e29\" (UID: \"84187080-59c1-404f-8929-26aa7cc87e29\") " Jan 28 16:13:22 crc kubenswrapper[4871]: I0128 16:13:22.450858 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84187080-59c1-404f-8929-26aa7cc87e29-utilities\") pod \"84187080-59c1-404f-8929-26aa7cc87e29\" (UID: \"84187080-59c1-404f-8929-26aa7cc87e29\") " Jan 28 16:13:22 crc kubenswrapper[4871]: I0128 16:13:22.451660 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84187080-59c1-404f-8929-26aa7cc87e29-utilities" (OuterVolumeSpecName: "utilities") pod "84187080-59c1-404f-8929-26aa7cc87e29" (UID: "84187080-59c1-404f-8929-26aa7cc87e29"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:13:22 crc kubenswrapper[4871]: I0128 16:13:22.457890 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84187080-59c1-404f-8929-26aa7cc87e29-kube-api-access-hgnkj" (OuterVolumeSpecName: "kube-api-access-hgnkj") pod "84187080-59c1-404f-8929-26aa7cc87e29" (UID: "84187080-59c1-404f-8929-26aa7cc87e29"). InnerVolumeSpecName "kube-api-access-hgnkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:13:22 crc kubenswrapper[4871]: I0128 16:13:22.553376 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgnkj\" (UniqueName: \"kubernetes.io/projected/84187080-59c1-404f-8929-26aa7cc87e29-kube-api-access-hgnkj\") on node \"crc\" DevicePath \"\"" Jan 28 16:13:22 crc kubenswrapper[4871]: I0128 16:13:22.553410 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84187080-59c1-404f-8929-26aa7cc87e29-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:13:23 crc kubenswrapper[4871]: I0128 16:13:23.026795 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84187080-59c1-404f-8929-26aa7cc87e29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84187080-59c1-404f-8929-26aa7cc87e29" (UID: "84187080-59c1-404f-8929-26aa7cc87e29"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:13:23 crc kubenswrapper[4871]: I0128 16:13:23.061456 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84187080-59c1-404f-8929-26aa7cc87e29-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:13:23 crc kubenswrapper[4871]: I0128 16:13:23.198382 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqp6r" event={"ID":"84187080-59c1-404f-8929-26aa7cc87e29","Type":"ContainerDied","Data":"747864fd641f1b203ad9cd1e877f1cf508e4daf857eff5f835c270a4340140d7"} Jan 28 16:13:23 crc kubenswrapper[4871]: I0128 16:13:23.198437 4871 scope.go:117] "RemoveContainer" containerID="39520b495d36581cccd24f9b610631dceec9b1cdafbcf129e698fa0e1277d6cd" Jan 28 16:13:23 crc kubenswrapper[4871]: I0128 16:13:23.198468 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rqp6r" Jan 28 16:13:23 crc kubenswrapper[4871]: I0128 16:13:23.213291 4871 scope.go:117] "RemoveContainer" containerID="e0b1900a1391db9326fa98dd53357af62ba56b3116aad3ea779395533a8d695d" Jan 28 16:13:23 crc kubenswrapper[4871]: I0128 16:13:23.235168 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqp6r"] Jan 28 16:13:23 crc kubenswrapper[4871]: I0128 16:13:23.241423 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqp6r"] Jan 28 16:13:23 crc kubenswrapper[4871]: I0128 16:13:23.242989 4871 scope.go:117] "RemoveContainer" containerID="644953c82e1e09124bf406f061691c116eb2c23aec3c8fec4249bf0198d10292" Jan 28 16:13:24 crc kubenswrapper[4871]: I0128 16:13:24.914225 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84187080-59c1-404f-8929-26aa7cc87e29" path="/var/lib/kubelet/pods/84187080-59c1-404f-8929-26aa7cc87e29/volumes" Jan 28 16:13:26 crc 
kubenswrapper[4871]: I0128 16:13:26.026293 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j7fwk" Jan 28 16:13:26 crc kubenswrapper[4871]: I0128 16:13:26.073948 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j7fwk" Jan 28 16:13:26 crc kubenswrapper[4871]: I0128 16:13:26.857113 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j7fwk"] Jan 28 16:13:27 crc kubenswrapper[4871]: I0128 16:13:27.228793 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j7fwk" podUID="ef0974b8-da83-44d4-9717-1ef400d5cf44" containerName="registry-server" containerID="cri-o://549fbcab358221d0621565f807c97612f7aa035dbc4667e0d792a7f20611e3e8" gracePeriod=2 Jan 28 16:13:27 crc kubenswrapper[4871]: I0128 16:13:27.832335 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j7fwk" Jan 28 16:13:27 crc kubenswrapper[4871]: I0128 16:13:27.942099 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmpsm\" (UniqueName: \"kubernetes.io/projected/ef0974b8-da83-44d4-9717-1ef400d5cf44-kube-api-access-bmpsm\") pod \"ef0974b8-da83-44d4-9717-1ef400d5cf44\" (UID: \"ef0974b8-da83-44d4-9717-1ef400d5cf44\") " Jan 28 16:13:27 crc kubenswrapper[4871]: I0128 16:13:27.942243 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef0974b8-da83-44d4-9717-1ef400d5cf44-utilities\") pod \"ef0974b8-da83-44d4-9717-1ef400d5cf44\" (UID: \"ef0974b8-da83-44d4-9717-1ef400d5cf44\") " Jan 28 16:13:27 crc kubenswrapper[4871]: I0128 16:13:27.942288 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef0974b8-da83-44d4-9717-1ef400d5cf44-catalog-content\") pod \"ef0974b8-da83-44d4-9717-1ef400d5cf44\" (UID: \"ef0974b8-da83-44d4-9717-1ef400d5cf44\") " Jan 28 16:13:27 crc kubenswrapper[4871]: I0128 16:13:27.943881 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef0974b8-da83-44d4-9717-1ef400d5cf44-utilities" (OuterVolumeSpecName: "utilities") pod "ef0974b8-da83-44d4-9717-1ef400d5cf44" (UID: "ef0974b8-da83-44d4-9717-1ef400d5cf44"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:13:27 crc kubenswrapper[4871]: I0128 16:13:27.948923 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef0974b8-da83-44d4-9717-1ef400d5cf44-kube-api-access-bmpsm" (OuterVolumeSpecName: "kube-api-access-bmpsm") pod "ef0974b8-da83-44d4-9717-1ef400d5cf44" (UID: "ef0974b8-da83-44d4-9717-1ef400d5cf44"). InnerVolumeSpecName "kube-api-access-bmpsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.046672 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef0974b8-da83-44d4-9717-1ef400d5cf44-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.046707 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmpsm\" (UniqueName: \"kubernetes.io/projected/ef0974b8-da83-44d4-9717-1ef400d5cf44-kube-api-access-bmpsm\") on node \"crc\" DevicePath \"\"" Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.062562 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef0974b8-da83-44d4-9717-1ef400d5cf44-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef0974b8-da83-44d4-9717-1ef400d5cf44" (UID: "ef0974b8-da83-44d4-9717-1ef400d5cf44"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.148213 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef0974b8-da83-44d4-9717-1ef400d5cf44-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.239815 4871 generic.go:334] "Generic (PLEG): container finished" podID="ef0974b8-da83-44d4-9717-1ef400d5cf44" containerID="549fbcab358221d0621565f807c97612f7aa035dbc4667e0d792a7f20611e3e8" exitCode=0 Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.239878 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j7fwk" event={"ID":"ef0974b8-da83-44d4-9717-1ef400d5cf44","Type":"ContainerDied","Data":"549fbcab358221d0621565f807c97612f7aa035dbc4667e0d792a7f20611e3e8"} Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.239924 4871 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-j7fwk" event={"ID":"ef0974b8-da83-44d4-9717-1ef400d5cf44","Type":"ContainerDied","Data":"49f1e9d5e03cea1bc7b38e756b915fa8212f15671b9003e6ac951922e9376dc5"} Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.239953 4871 scope.go:117] "RemoveContainer" containerID="549fbcab358221d0621565f807c97612f7aa035dbc4667e0d792a7f20611e3e8" Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.240698 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j7fwk" Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.261707 4871 scope.go:117] "RemoveContainer" containerID="36802b21af5e814a02e4b73097103dd95c2db1960d7d04553ece3365bc0e8029" Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.276868 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j7fwk"] Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.285346 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j7fwk"] Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.300354 4871 scope.go:117] "RemoveContainer" containerID="820c0b5283324e20dfe34537928381507d8aef1de34a2ccb8624f65ad20401ad" Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.323464 4871 scope.go:117] "RemoveContainer" containerID="549fbcab358221d0621565f807c97612f7aa035dbc4667e0d792a7f20611e3e8" Jan 28 16:13:28 crc kubenswrapper[4871]: E0128 16:13:28.324079 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"549fbcab358221d0621565f807c97612f7aa035dbc4667e0d792a7f20611e3e8\": container with ID starting with 549fbcab358221d0621565f807c97612f7aa035dbc4667e0d792a7f20611e3e8 not found: ID does not exist" containerID="549fbcab358221d0621565f807c97612f7aa035dbc4667e0d792a7f20611e3e8" Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.324154 4871 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"549fbcab358221d0621565f807c97612f7aa035dbc4667e0d792a7f20611e3e8"} err="failed to get container status \"549fbcab358221d0621565f807c97612f7aa035dbc4667e0d792a7f20611e3e8\": rpc error: code = NotFound desc = could not find container \"549fbcab358221d0621565f807c97612f7aa035dbc4667e0d792a7f20611e3e8\": container with ID starting with 549fbcab358221d0621565f807c97612f7aa035dbc4667e0d792a7f20611e3e8 not found: ID does not exist" Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.324209 4871 scope.go:117] "RemoveContainer" containerID="36802b21af5e814a02e4b73097103dd95c2db1960d7d04553ece3365bc0e8029" Jan 28 16:13:28 crc kubenswrapper[4871]: E0128 16:13:28.324576 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36802b21af5e814a02e4b73097103dd95c2db1960d7d04553ece3365bc0e8029\": container with ID starting with 36802b21af5e814a02e4b73097103dd95c2db1960d7d04553ece3365bc0e8029 not found: ID does not exist" containerID="36802b21af5e814a02e4b73097103dd95c2db1960d7d04553ece3365bc0e8029" Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.324622 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36802b21af5e814a02e4b73097103dd95c2db1960d7d04553ece3365bc0e8029"} err="failed to get container status \"36802b21af5e814a02e4b73097103dd95c2db1960d7d04553ece3365bc0e8029\": rpc error: code = NotFound desc = could not find container \"36802b21af5e814a02e4b73097103dd95c2db1960d7d04553ece3365bc0e8029\": container with ID starting with 36802b21af5e814a02e4b73097103dd95c2db1960d7d04553ece3365bc0e8029 not found: ID does not exist" Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.324641 4871 scope.go:117] "RemoveContainer" containerID="820c0b5283324e20dfe34537928381507d8aef1de34a2ccb8624f65ad20401ad" Jan 28 16:13:28 crc kubenswrapper[4871]: E0128 
16:13:28.325329 4871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"820c0b5283324e20dfe34537928381507d8aef1de34a2ccb8624f65ad20401ad\": container with ID starting with 820c0b5283324e20dfe34537928381507d8aef1de34a2ccb8624f65ad20401ad not found: ID does not exist" containerID="820c0b5283324e20dfe34537928381507d8aef1de34a2ccb8624f65ad20401ad" Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.325392 4871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"820c0b5283324e20dfe34537928381507d8aef1de34a2ccb8624f65ad20401ad"} err="failed to get container status \"820c0b5283324e20dfe34537928381507d8aef1de34a2ccb8624f65ad20401ad\": rpc error: code = NotFound desc = could not find container \"820c0b5283324e20dfe34537928381507d8aef1de34a2ccb8624f65ad20401ad\": container with ID starting with 820c0b5283324e20dfe34537928381507d8aef1de34a2ccb8624f65ad20401ad not found: ID does not exist" Jan 28 16:13:28 crc kubenswrapper[4871]: I0128 16:13:28.927781 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef0974b8-da83-44d4-9717-1ef400d5cf44" path="/var/lib/kubelet/pods/ef0974b8-da83-44d4-9717-1ef400d5cf44/volumes" Jan 28 16:13:33 crc kubenswrapper[4871]: I0128 16:13:33.904331 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:13:33 crc kubenswrapper[4871]: E0128 16:13:33.915172 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:13:47 crc kubenswrapper[4871]: I0128 16:13:47.903713 
4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:13:47 crc kubenswrapper[4871]: E0128 16:13:47.904578 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:14:00 crc kubenswrapper[4871]: I0128 16:14:00.904374 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:14:00 crc kubenswrapper[4871]: E0128 16:14:00.905091 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:14:14 crc kubenswrapper[4871]: I0128 16:14:14.905153 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:14:14 crc kubenswrapper[4871]: E0128 16:14:14.906035 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:14:25 crc kubenswrapper[4871]: I0128 
16:14:25.905066 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:14:25 crc kubenswrapper[4871]: E0128 16:14:25.905860 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.725866 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gr4gn"] Jan 28 16:14:30 crc kubenswrapper[4871]: E0128 16:14:30.726267 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0974b8-da83-44d4-9717-1ef400d5cf44" containerName="extract-utilities" Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.726284 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0974b8-da83-44d4-9717-1ef400d5cf44" containerName="extract-utilities" Jan 28 16:14:30 crc kubenswrapper[4871]: E0128 16:14:30.726310 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84187080-59c1-404f-8929-26aa7cc87e29" containerName="extract-utilities" Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.726318 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="84187080-59c1-404f-8929-26aa7cc87e29" containerName="extract-utilities" Jan 28 16:14:30 crc kubenswrapper[4871]: E0128 16:14:30.726338 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0974b8-da83-44d4-9717-1ef400d5cf44" containerName="registry-server" Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.726348 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0974b8-da83-44d4-9717-1ef400d5cf44" containerName="registry-server" Jan 
28 16:14:30 crc kubenswrapper[4871]: E0128 16:14:30.726362 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84187080-59c1-404f-8929-26aa7cc87e29" containerName="extract-content" Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.726370 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="84187080-59c1-404f-8929-26aa7cc87e29" containerName="extract-content" Jan 28 16:14:30 crc kubenswrapper[4871]: E0128 16:14:30.726391 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0974b8-da83-44d4-9717-1ef400d5cf44" containerName="extract-content" Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.726400 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0974b8-da83-44d4-9717-1ef400d5cf44" containerName="extract-content" Jan 28 16:14:30 crc kubenswrapper[4871]: E0128 16:14:30.726411 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84187080-59c1-404f-8929-26aa7cc87e29" containerName="registry-server" Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.726418 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="84187080-59c1-404f-8929-26aa7cc87e29" containerName="registry-server" Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.726624 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef0974b8-da83-44d4-9717-1ef400d5cf44" containerName="registry-server" Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.726651 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="84187080-59c1-404f-8929-26aa7cc87e29" containerName="registry-server" Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.727961 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gr4gn" Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.752930 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a336ec0-93b4-4c84-a237-ded6c9e8b78f-utilities\") pod \"community-operators-gr4gn\" (UID: \"7a336ec0-93b4-4c84-a237-ded6c9e8b78f\") " pod="openshift-marketplace/community-operators-gr4gn" Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.753043 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a336ec0-93b4-4c84-a237-ded6c9e8b78f-catalog-content\") pod \"community-operators-gr4gn\" (UID: \"7a336ec0-93b4-4c84-a237-ded6c9e8b78f\") " pod="openshift-marketplace/community-operators-gr4gn" Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.753297 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s95dl\" (UniqueName: \"kubernetes.io/projected/7a336ec0-93b4-4c84-a237-ded6c9e8b78f-kube-api-access-s95dl\") pod \"community-operators-gr4gn\" (UID: \"7a336ec0-93b4-4c84-a237-ded6c9e8b78f\") " pod="openshift-marketplace/community-operators-gr4gn" Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.768275 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gr4gn"] Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.856222 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s95dl\" (UniqueName: \"kubernetes.io/projected/7a336ec0-93b4-4c84-a237-ded6c9e8b78f-kube-api-access-s95dl\") pod \"community-operators-gr4gn\" (UID: \"7a336ec0-93b4-4c84-a237-ded6c9e8b78f\") " pod="openshift-marketplace/community-operators-gr4gn" Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.856400 4871 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a336ec0-93b4-4c84-a237-ded6c9e8b78f-utilities\") pod \"community-operators-gr4gn\" (UID: \"7a336ec0-93b4-4c84-a237-ded6c9e8b78f\") " pod="openshift-marketplace/community-operators-gr4gn" Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.856500 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a336ec0-93b4-4c84-a237-ded6c9e8b78f-catalog-content\") pod \"community-operators-gr4gn\" (UID: \"7a336ec0-93b4-4c84-a237-ded6c9e8b78f\") " pod="openshift-marketplace/community-operators-gr4gn" Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.857479 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a336ec0-93b4-4c84-a237-ded6c9e8b78f-catalog-content\") pod \"community-operators-gr4gn\" (UID: \"7a336ec0-93b4-4c84-a237-ded6c9e8b78f\") " pod="openshift-marketplace/community-operators-gr4gn" Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.857944 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a336ec0-93b4-4c84-a237-ded6c9e8b78f-utilities\") pod \"community-operators-gr4gn\" (UID: \"7a336ec0-93b4-4c84-a237-ded6c9e8b78f\") " pod="openshift-marketplace/community-operators-gr4gn" Jan 28 16:14:30 crc kubenswrapper[4871]: I0128 16:14:30.894545 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s95dl\" (UniqueName: \"kubernetes.io/projected/7a336ec0-93b4-4c84-a237-ded6c9e8b78f-kube-api-access-s95dl\") pod \"community-operators-gr4gn\" (UID: \"7a336ec0-93b4-4c84-a237-ded6c9e8b78f\") " pod="openshift-marketplace/community-operators-gr4gn" Jan 28 16:14:31 crc kubenswrapper[4871]: I0128 16:14:31.045426 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gr4gn" Jan 28 16:14:31 crc kubenswrapper[4871]: I0128 16:14:31.362811 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gr4gn"] Jan 28 16:14:31 crc kubenswrapper[4871]: I0128 16:14:31.645741 4871 scope.go:117] "RemoveContainer" containerID="b4acd5714bc952099e5b475461cac0e61371f7e8a10f907939e56cdd3df57346" Jan 28 16:14:31 crc kubenswrapper[4871]: I0128 16:14:31.788828 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr4gn" event={"ID":"7a336ec0-93b4-4c84-a237-ded6c9e8b78f","Type":"ContainerStarted","Data":"c439f6a2f17cf1cca44c43b9dcb236219d5ca2c391defb586c2d48b1abcb45e6"} Jan 28 16:14:32 crc kubenswrapper[4871]: I0128 16:14:32.804461 4871 generic.go:334] "Generic (PLEG): container finished" podID="7a336ec0-93b4-4c84-a237-ded6c9e8b78f" containerID="5765ec75d9d68e58ef887b1a8a49a37bc0fbe29cb8f86d93e1fe236012fe35b7" exitCode=0 Jan 28 16:14:32 crc kubenswrapper[4871]: I0128 16:14:32.804600 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr4gn" event={"ID":"7a336ec0-93b4-4c84-a237-ded6c9e8b78f","Type":"ContainerDied","Data":"5765ec75d9d68e58ef887b1a8a49a37bc0fbe29cb8f86d93e1fe236012fe35b7"} Jan 28 16:14:34 crc kubenswrapper[4871]: I0128 16:14:34.824390 4871 generic.go:334] "Generic (PLEG): container finished" podID="7a336ec0-93b4-4c84-a237-ded6c9e8b78f" containerID="840021c80e0cc4502072306587dfb4fe8ce92804de5add1ada50187a27fc686d" exitCode=0 Jan 28 16:14:34 crc kubenswrapper[4871]: I0128 16:14:34.824458 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr4gn" event={"ID":"7a336ec0-93b4-4c84-a237-ded6c9e8b78f","Type":"ContainerDied","Data":"840021c80e0cc4502072306587dfb4fe8ce92804de5add1ada50187a27fc686d"} Jan 28 16:14:36 crc kubenswrapper[4871]: I0128 16:14:36.848260 4871 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr4gn" event={"ID":"7a336ec0-93b4-4c84-a237-ded6c9e8b78f","Type":"ContainerStarted","Data":"5e1aae506140cd824664dda41247bd689213006106cfbb3f64ca9d78bb5a70f7"} Jan 28 16:14:36 crc kubenswrapper[4871]: I0128 16:14:36.872752 4871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gr4gn" podStartSLOduration=4.116918113 podStartE2EDuration="6.872736861s" podCreationTimestamp="2026-01-28 16:14:30 +0000 UTC" firstStartedPulling="2026-01-28 16:14:32.807188781 +0000 UTC m=+3424.703027103" lastFinishedPulling="2026-01-28 16:14:35.563007529 +0000 UTC m=+3427.458845851" observedRunningTime="2026-01-28 16:14:36.868926731 +0000 UTC m=+3428.764765053" watchObservedRunningTime="2026-01-28 16:14:36.872736861 +0000 UTC m=+3428.768575183" Jan 28 16:14:39 crc kubenswrapper[4871]: I0128 16:14:39.904408 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:14:39 crc kubenswrapper[4871]: E0128 16:14:39.905746 4871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7tkqm_openshift-machine-config-operator(25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f)\"" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" podUID="25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f" Jan 28 16:14:41 crc kubenswrapper[4871]: I0128 16:14:41.047176 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gr4gn" Jan 28 16:14:41 crc kubenswrapper[4871]: I0128 16:14:41.047466 4871 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gr4gn" Jan 28 16:14:41 crc kubenswrapper[4871]: I0128 16:14:41.100605 4871 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gr4gn" Jan 28 16:14:41 crc kubenswrapper[4871]: I0128 16:14:41.927482 4871 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gr4gn" Jan 28 16:14:41 crc kubenswrapper[4871]: I0128 16:14:41.978616 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gr4gn"] Jan 28 16:14:43 crc kubenswrapper[4871]: I0128 16:14:43.901170 4871 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gr4gn" podUID="7a336ec0-93b4-4c84-a237-ded6c9e8b78f" containerName="registry-server" containerID="cri-o://5e1aae506140cd824664dda41247bd689213006106cfbb3f64ca9d78bb5a70f7" gracePeriod=2 Jan 28 16:14:44 crc kubenswrapper[4871]: I0128 16:14:44.918020 4871 generic.go:334] "Generic (PLEG): container finished" podID="7a336ec0-93b4-4c84-a237-ded6c9e8b78f" containerID="5e1aae506140cd824664dda41247bd689213006106cfbb3f64ca9d78bb5a70f7" exitCode=0 Jan 28 16:14:44 crc kubenswrapper[4871]: I0128 16:14:44.919410 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr4gn" event={"ID":"7a336ec0-93b4-4c84-a237-ded6c9e8b78f","Type":"ContainerDied","Data":"5e1aae506140cd824664dda41247bd689213006106cfbb3f64ca9d78bb5a70f7"} Jan 28 16:14:45 crc kubenswrapper[4871]: I0128 16:14:45.091225 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gr4gn" Jan 28 16:14:45 crc kubenswrapper[4871]: I0128 16:14:45.231988 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s95dl\" (UniqueName: \"kubernetes.io/projected/7a336ec0-93b4-4c84-a237-ded6c9e8b78f-kube-api-access-s95dl\") pod \"7a336ec0-93b4-4c84-a237-ded6c9e8b78f\" (UID: \"7a336ec0-93b4-4c84-a237-ded6c9e8b78f\") " Jan 28 16:14:45 crc kubenswrapper[4871]: I0128 16:14:45.232384 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a336ec0-93b4-4c84-a237-ded6c9e8b78f-utilities\") pod \"7a336ec0-93b4-4c84-a237-ded6c9e8b78f\" (UID: \"7a336ec0-93b4-4c84-a237-ded6c9e8b78f\") " Jan 28 16:14:45 crc kubenswrapper[4871]: I0128 16:14:45.232401 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a336ec0-93b4-4c84-a237-ded6c9e8b78f-catalog-content\") pod \"7a336ec0-93b4-4c84-a237-ded6c9e8b78f\" (UID: \"7a336ec0-93b4-4c84-a237-ded6c9e8b78f\") " Jan 28 16:14:45 crc kubenswrapper[4871]: I0128 16:14:45.233528 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a336ec0-93b4-4c84-a237-ded6c9e8b78f-utilities" (OuterVolumeSpecName: "utilities") pod "7a336ec0-93b4-4c84-a237-ded6c9e8b78f" (UID: "7a336ec0-93b4-4c84-a237-ded6c9e8b78f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:14:45 crc kubenswrapper[4871]: I0128 16:14:45.238530 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a336ec0-93b4-4c84-a237-ded6c9e8b78f-kube-api-access-s95dl" (OuterVolumeSpecName: "kube-api-access-s95dl") pod "7a336ec0-93b4-4c84-a237-ded6c9e8b78f" (UID: "7a336ec0-93b4-4c84-a237-ded6c9e8b78f"). InnerVolumeSpecName "kube-api-access-s95dl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:14:45 crc kubenswrapper[4871]: I0128 16:14:45.290312 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a336ec0-93b4-4c84-a237-ded6c9e8b78f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a336ec0-93b4-4c84-a237-ded6c9e8b78f" (UID: "7a336ec0-93b4-4c84-a237-ded6c9e8b78f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:14:45 crc kubenswrapper[4871]: I0128 16:14:45.334867 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s95dl\" (UniqueName: \"kubernetes.io/projected/7a336ec0-93b4-4c84-a237-ded6c9e8b78f-kube-api-access-s95dl\") on node \"crc\" DevicePath \"\"" Jan 28 16:14:45 crc kubenswrapper[4871]: I0128 16:14:45.334900 4871 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a336ec0-93b4-4c84-a237-ded6c9e8b78f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:14:45 crc kubenswrapper[4871]: I0128 16:14:45.334910 4871 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a336ec0-93b4-4c84-a237-ded6c9e8b78f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:14:45 crc kubenswrapper[4871]: I0128 16:14:45.929367 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr4gn" event={"ID":"7a336ec0-93b4-4c84-a237-ded6c9e8b78f","Type":"ContainerDied","Data":"c439f6a2f17cf1cca44c43b9dcb236219d5ca2c391defb586c2d48b1abcb45e6"} Jan 28 16:14:45 crc kubenswrapper[4871]: I0128 16:14:45.930578 4871 scope.go:117] "RemoveContainer" containerID="5e1aae506140cd824664dda41247bd689213006106cfbb3f64ca9d78bb5a70f7" Jan 28 16:14:45 crc kubenswrapper[4871]: I0128 16:14:45.929499 4871 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gr4gn" Jan 28 16:14:45 crc kubenswrapper[4871]: I0128 16:14:45.960155 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gr4gn"] Jan 28 16:14:45 crc kubenswrapper[4871]: I0128 16:14:45.962847 4871 scope.go:117] "RemoveContainer" containerID="840021c80e0cc4502072306587dfb4fe8ce92804de5add1ada50187a27fc686d" Jan 28 16:14:45 crc kubenswrapper[4871]: I0128 16:14:45.968117 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gr4gn"] Jan 28 16:14:45 crc kubenswrapper[4871]: I0128 16:14:45.981434 4871 scope.go:117] "RemoveContainer" containerID="5765ec75d9d68e58ef887b1a8a49a37bc0fbe29cb8f86d93e1fe236012fe35b7" Jan 28 16:14:46 crc kubenswrapper[4871]: I0128 16:14:46.916339 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a336ec0-93b4-4c84-a237-ded6c9e8b78f" path="/var/lib/kubelet/pods/7a336ec0-93b4-4c84-a237-ded6c9e8b78f/volumes" Jan 28 16:14:53 crc kubenswrapper[4871]: I0128 16:14:53.905512 4871 scope.go:117] "RemoveContainer" containerID="b5ad4253e0c0a5691c61df68e53ebcf66ce05ec3cdbac4c3840c10191750374b" Jan 28 16:14:55 crc kubenswrapper[4871]: I0128 16:14:55.007076 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7tkqm" event={"ID":"25fcadbe-ed48-4ec0-a8e2-5e2d5f0eed0f","Type":"ContainerStarted","Data":"13168ae0538f2e216a86689605c7b92aa5455ba2b9d34024a5df8aba2e4445af"} Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 16:15:00.145703 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493615-lh85n"] Jan 28 16:15:00 crc kubenswrapper[4871]: E0128 16:15:00.146579 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a336ec0-93b4-4c84-a237-ded6c9e8b78f" containerName="extract-content" Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 
16:15:00.146616 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a336ec0-93b4-4c84-a237-ded6c9e8b78f" containerName="extract-content" Jan 28 16:15:00 crc kubenswrapper[4871]: E0128 16:15:00.146667 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a336ec0-93b4-4c84-a237-ded6c9e8b78f" containerName="extract-utilities" Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 16:15:00.146677 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a336ec0-93b4-4c84-a237-ded6c9e8b78f" containerName="extract-utilities" Jan 28 16:15:00 crc kubenswrapper[4871]: E0128 16:15:00.146687 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a336ec0-93b4-4c84-a237-ded6c9e8b78f" containerName="registry-server" Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 16:15:00.146694 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a336ec0-93b4-4c84-a237-ded6c9e8b78f" containerName="registry-server" Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 16:15:00.146864 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a336ec0-93b4-4c84-a237-ded6c9e8b78f" containerName="registry-server" Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 16:15:00.147380 4871 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-lh85n" Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 16:15:00.149805 4871 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 16:15:00.149861 4871 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 16:15:00.158313 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493615-lh85n"] Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 16:15:00.316466 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh8xt\" (UniqueName: \"kubernetes.io/projected/581c1494-a42d-439d-b4e9-03b445565435-kube-api-access-hh8xt\") pod \"collect-profiles-29493615-lh85n\" (UID: \"581c1494-a42d-439d-b4e9-03b445565435\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-lh85n" Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 16:15:00.316568 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/581c1494-a42d-439d-b4e9-03b445565435-secret-volume\") pod \"collect-profiles-29493615-lh85n\" (UID: \"581c1494-a42d-439d-b4e9-03b445565435\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-lh85n" Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 16:15:00.316778 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/581c1494-a42d-439d-b4e9-03b445565435-config-volume\") pod \"collect-profiles-29493615-lh85n\" (UID: \"581c1494-a42d-439d-b4e9-03b445565435\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-lh85n" Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 16:15:00.418227 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/581c1494-a42d-439d-b4e9-03b445565435-config-volume\") pod \"collect-profiles-29493615-lh85n\" (UID: \"581c1494-a42d-439d-b4e9-03b445565435\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-lh85n" Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 16:15:00.418316 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh8xt\" (UniqueName: \"kubernetes.io/projected/581c1494-a42d-439d-b4e9-03b445565435-kube-api-access-hh8xt\") pod \"collect-profiles-29493615-lh85n\" (UID: \"581c1494-a42d-439d-b4e9-03b445565435\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-lh85n" Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 16:15:00.418369 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/581c1494-a42d-439d-b4e9-03b445565435-secret-volume\") pod \"collect-profiles-29493615-lh85n\" (UID: \"581c1494-a42d-439d-b4e9-03b445565435\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-lh85n" Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 16:15:00.419225 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/581c1494-a42d-439d-b4e9-03b445565435-config-volume\") pod \"collect-profiles-29493615-lh85n\" (UID: \"581c1494-a42d-439d-b4e9-03b445565435\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-lh85n" Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 16:15:00.424668 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/581c1494-a42d-439d-b4e9-03b445565435-secret-volume\") pod \"collect-profiles-29493615-lh85n\" (UID: \"581c1494-a42d-439d-b4e9-03b445565435\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-lh85n" Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 16:15:00.440365 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh8xt\" (UniqueName: \"kubernetes.io/projected/581c1494-a42d-439d-b4e9-03b445565435-kube-api-access-hh8xt\") pod \"collect-profiles-29493615-lh85n\" (UID: \"581c1494-a42d-439d-b4e9-03b445565435\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-lh85n" Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 16:15:00.504405 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-lh85n" Jan 28 16:15:00 crc kubenswrapper[4871]: I0128 16:15:00.927439 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493615-lh85n"] Jan 28 16:15:01 crc kubenswrapper[4871]: I0128 16:15:01.052866 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-lh85n" event={"ID":"581c1494-a42d-439d-b4e9-03b445565435","Type":"ContainerStarted","Data":"d7556ab637e13ef61495fbd762d24c540cb8aed6acc4c2027228a496b49ee5a6"} Jan 28 16:15:02 crc kubenswrapper[4871]: I0128 16:15:02.064137 4871 generic.go:334] "Generic (PLEG): container finished" podID="581c1494-a42d-439d-b4e9-03b445565435" containerID="2a63bf27181237523e75df26c6e7c4ef9b91830739a3e3a18374f229b3a45412" exitCode=0 Jan 28 16:15:02 crc kubenswrapper[4871]: I0128 16:15:02.064234 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-lh85n" 
event={"ID":"581c1494-a42d-439d-b4e9-03b445565435","Type":"ContainerDied","Data":"2a63bf27181237523e75df26c6e7c4ef9b91830739a3e3a18374f229b3a45412"} Jan 28 16:15:03 crc kubenswrapper[4871]: I0128 16:15:03.385392 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-lh85n" Jan 28 16:15:03 crc kubenswrapper[4871]: I0128 16:15:03.569087 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/581c1494-a42d-439d-b4e9-03b445565435-config-volume\") pod \"581c1494-a42d-439d-b4e9-03b445565435\" (UID: \"581c1494-a42d-439d-b4e9-03b445565435\") " Jan 28 16:15:03 crc kubenswrapper[4871]: I0128 16:15:03.569414 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/581c1494-a42d-439d-b4e9-03b445565435-secret-volume\") pod \"581c1494-a42d-439d-b4e9-03b445565435\" (UID: \"581c1494-a42d-439d-b4e9-03b445565435\") " Jan 28 16:15:03 crc kubenswrapper[4871]: I0128 16:15:03.569467 4871 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh8xt\" (UniqueName: \"kubernetes.io/projected/581c1494-a42d-439d-b4e9-03b445565435-kube-api-access-hh8xt\") pod \"581c1494-a42d-439d-b4e9-03b445565435\" (UID: \"581c1494-a42d-439d-b4e9-03b445565435\") " Jan 28 16:15:03 crc kubenswrapper[4871]: I0128 16:15:03.570623 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/581c1494-a42d-439d-b4e9-03b445565435-config-volume" (OuterVolumeSpecName: "config-volume") pod "581c1494-a42d-439d-b4e9-03b445565435" (UID: "581c1494-a42d-439d-b4e9-03b445565435"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:15:03 crc kubenswrapper[4871]: I0128 16:15:03.582836 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/581c1494-a42d-439d-b4e9-03b445565435-kube-api-access-hh8xt" (OuterVolumeSpecName: "kube-api-access-hh8xt") pod "581c1494-a42d-439d-b4e9-03b445565435" (UID: "581c1494-a42d-439d-b4e9-03b445565435"). InnerVolumeSpecName "kube-api-access-hh8xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:15:03 crc kubenswrapper[4871]: I0128 16:15:03.582833 4871 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/581c1494-a42d-439d-b4e9-03b445565435-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "581c1494-a42d-439d-b4e9-03b445565435" (UID: "581c1494-a42d-439d-b4e9-03b445565435"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:15:03 crc kubenswrapper[4871]: I0128 16:15:03.671129 4871 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/581c1494-a42d-439d-b4e9-03b445565435-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:03 crc kubenswrapper[4871]: I0128 16:15:03.671177 4871 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh8xt\" (UniqueName: \"kubernetes.io/projected/581c1494-a42d-439d-b4e9-03b445565435-kube-api-access-hh8xt\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:03 crc kubenswrapper[4871]: I0128 16:15:03.671190 4871 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/581c1494-a42d-439d-b4e9-03b445565435-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:04 crc kubenswrapper[4871]: I0128 16:15:04.081568 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-lh85n" 
event={"ID":"581c1494-a42d-439d-b4e9-03b445565435","Type":"ContainerDied","Data":"d7556ab637e13ef61495fbd762d24c540cb8aed6acc4c2027228a496b49ee5a6"} Jan 28 16:15:04 crc kubenswrapper[4871]: I0128 16:15:04.081970 4871 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7556ab637e13ef61495fbd762d24c540cb8aed6acc4c2027228a496b49ee5a6" Jan 28 16:15:04 crc kubenswrapper[4871]: I0128 16:15:04.081617 4871 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-lh85n" Jan 28 16:15:04 crc kubenswrapper[4871]: I0128 16:15:04.470917 4871 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz"] Jan 28 16:15:04 crc kubenswrapper[4871]: I0128 16:15:04.480639 4871 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-rnmdz"] Jan 28 16:15:04 crc kubenswrapper[4871]: I0128 16:15:04.918989 4871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4aacaf4-4e08-4206-9dd2-82ab80507450" path="/var/lib/kubelet/pods/e4aacaf4-4e08-4206-9dd2-82ab80507450/volumes" Jan 28 16:15:13 crc kubenswrapper[4871]: I0128 16:15:13.346149 4871 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dssv8"] Jan 28 16:15:13 crc kubenswrapper[4871]: E0128 16:15:13.349807 4871 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="581c1494-a42d-439d-b4e9-03b445565435" containerName="collect-profiles" Jan 28 16:15:13 crc kubenswrapper[4871]: I0128 16:15:13.350268 4871 state_mem.go:107] "Deleted CPUSet assignment" podUID="581c1494-a42d-439d-b4e9-03b445565435" containerName="collect-profiles" Jan 28 16:15:13 crc kubenswrapper[4871]: I0128 16:15:13.350614 4871 memory_manager.go:354] "RemoveStaleState removing state" podUID="581c1494-a42d-439d-b4e9-03b445565435" containerName="collect-profiles" 
Jan 28 16:15:13 crc kubenswrapper[4871]: I0128 16:15:13.352136 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dssv8" Jan 28 16:15:13 crc kubenswrapper[4871]: I0128 16:15:13.359539 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dssv8"] Jan 28 16:15:13 crc kubenswrapper[4871]: I0128 16:15:13.524070 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfqxm\" (UniqueName: \"kubernetes.io/projected/c682479e-b1f5-4450-8bda-db760ae2adb9-kube-api-access-mfqxm\") pod \"certified-operators-dssv8\" (UID: \"c682479e-b1f5-4450-8bda-db760ae2adb9\") " pod="openshift-marketplace/certified-operators-dssv8" Jan 28 16:15:13 crc kubenswrapper[4871]: I0128 16:15:13.524143 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c682479e-b1f5-4450-8bda-db760ae2adb9-catalog-content\") pod \"certified-operators-dssv8\" (UID: \"c682479e-b1f5-4450-8bda-db760ae2adb9\") " pod="openshift-marketplace/certified-operators-dssv8" Jan 28 16:15:13 crc kubenswrapper[4871]: I0128 16:15:13.524190 4871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c682479e-b1f5-4450-8bda-db760ae2adb9-utilities\") pod \"certified-operators-dssv8\" (UID: \"c682479e-b1f5-4450-8bda-db760ae2adb9\") " pod="openshift-marketplace/certified-operators-dssv8" Jan 28 16:15:13 crc kubenswrapper[4871]: I0128 16:15:13.625840 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfqxm\" (UniqueName: \"kubernetes.io/projected/c682479e-b1f5-4450-8bda-db760ae2adb9-kube-api-access-mfqxm\") pod \"certified-operators-dssv8\" (UID: \"c682479e-b1f5-4450-8bda-db760ae2adb9\") " 
pod="openshift-marketplace/certified-operators-dssv8" Jan 28 16:15:13 crc kubenswrapper[4871]: I0128 16:15:13.626193 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c682479e-b1f5-4450-8bda-db760ae2adb9-catalog-content\") pod \"certified-operators-dssv8\" (UID: \"c682479e-b1f5-4450-8bda-db760ae2adb9\") " pod="openshift-marketplace/certified-operators-dssv8" Jan 28 16:15:13 crc kubenswrapper[4871]: I0128 16:15:13.626308 4871 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c682479e-b1f5-4450-8bda-db760ae2adb9-utilities\") pod \"certified-operators-dssv8\" (UID: \"c682479e-b1f5-4450-8bda-db760ae2adb9\") " pod="openshift-marketplace/certified-operators-dssv8" Jan 28 16:15:13 crc kubenswrapper[4871]: I0128 16:15:13.626938 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c682479e-b1f5-4450-8bda-db760ae2adb9-utilities\") pod \"certified-operators-dssv8\" (UID: \"c682479e-b1f5-4450-8bda-db760ae2adb9\") " pod="openshift-marketplace/certified-operators-dssv8" Jan 28 16:15:13 crc kubenswrapper[4871]: I0128 16:15:13.626941 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c682479e-b1f5-4450-8bda-db760ae2adb9-catalog-content\") pod \"certified-operators-dssv8\" (UID: \"c682479e-b1f5-4450-8bda-db760ae2adb9\") " pod="openshift-marketplace/certified-operators-dssv8" Jan 28 16:15:13 crc kubenswrapper[4871]: I0128 16:15:13.649996 4871 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfqxm\" (UniqueName: \"kubernetes.io/projected/c682479e-b1f5-4450-8bda-db760ae2adb9-kube-api-access-mfqxm\") pod \"certified-operators-dssv8\" (UID: \"c682479e-b1f5-4450-8bda-db760ae2adb9\") " 
pod="openshift-marketplace/certified-operators-dssv8" Jan 28 16:15:13 crc kubenswrapper[4871]: I0128 16:15:13.688573 4871 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dssv8" Jan 28 16:15:14 crc kubenswrapper[4871]: I0128 16:15:14.002528 4871 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dssv8"] Jan 28 16:15:14 crc kubenswrapper[4871]: I0128 16:15:14.162910 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dssv8" event={"ID":"c682479e-b1f5-4450-8bda-db760ae2adb9","Type":"ContainerStarted","Data":"9ef20a0663b2230495371935277a8a0738bc7d842438601adc1b596bb581c096"} Jan 28 16:15:15 crc kubenswrapper[4871]: I0128 16:15:15.172198 4871 generic.go:334] "Generic (PLEG): container finished" podID="c682479e-b1f5-4450-8bda-db760ae2adb9" containerID="5fcdeb759fb57ca7e305c2569003b4817b506a9b4242289c00e97342b77b5c49" exitCode=0 Jan 28 16:15:15 crc kubenswrapper[4871]: I0128 16:15:15.172384 4871 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dssv8" event={"ID":"c682479e-b1f5-4450-8bda-db760ae2adb9","Type":"ContainerDied","Data":"5fcdeb759fb57ca7e305c2569003b4817b506a9b4242289c00e97342b77b5c49"}